AI Pulse: the last 7 days
Daily AI pulse from YouTube, blogs, Reddit, and HN. Ruthlessly filtered.
Sources (41)
- [critical] Andrej Karpathy
  Former Tesla AI director, OpenAI cofounder. Every video is gold.
- [critical] Anthropic
  Anthropic's official channel. Every Claude release.
- [critical] ComfyUI Blog
  Release log for ComfyUI integrations: Luma Uni-1, GPT Image 2, ACE-Step music gen, Seedance. Covers video + image + music + workflows.
- [critical] OpenAI Blog
  OpenAI's official blog. All releases.
- [critical] Simon Willison's Weblog
  The best AI 'thinker'. Daily posts, deep insights, low hype rate.
- [high] AI Explained
  Deep analysis of papers and benchmarks, low hype rate.
- [high] AI Jason
  Practical tutorials on Claude Code, MCP, and vibe-coding workflows.
- [high] Ben's Bites
  Daily AI digest, creator-friendly tone. Codex, model releases, agentic AI.
- [high] Cole Medin
  Vibe coding + agentic workflows + Claude Code MCP integrations.
- [high] Fal AI Blog
  Fal hosts most new AI image/video models; their blog gives early signals of launches.
- [high] HN: 3D & Gaussian Splatting
  HN signal for generative 3D: Gaussian Splatting, NeRF, image-to-3D. Threshold of 20 because the category is niche (historic top: 182 pts).
- [high] HN: AI agents / MCP
  HN posts about agents, MCP, and vibe coding with at least 100 points.
- [high] HN: Claude / Anthropic
  HN posts mentioning 'Claude' or 'Anthropic' with at least 100 points.
- [high] Hugging Face Blog
  Releases for image, video, audio, and 3D models. Partly tech-heavy; Gemini relevance scoring filters out the noise. Downgraded from critical: too much volume for 'must-read' status.
- [high] IndyDevDan
  Claude Code power user: prompts, hooks.
- [high] Interconnects (Nathan Lambert)
  AI policy + research analysis. Low hype rate, opinionated.
- [high] Latent Space
  Swyx's podcast + blog: founder interviews and engineering deep dives.
- [high] Matt Wolfe
  Comprehensive weekly digest of AI tools. ~700K subs.
- [high] Matthew Berman
  AI news, model release reviews, agent demos. High output.
- [high] r/aivideo
  AI video community: Sora, Veo, Runway, Kling, LTX. What genuinely surprises creators.
- [high] r/ClaudeAI
  The Claude community: power users, tips, issues.
- [high] r/LocalLLaMA
  Open-source LLMs, local inference, benchmarks without the hype.
- [high] r/StableDiffusion
  The largest open-source image-gen community (700k+ users). Model launches, LoRAs, ComfyUI workflows.
- [high] Riley Brown
  Vibe coding, AI builder workflows, Cursor + Claude tutorials.
- [high] The Decoder
  German AI news outlet in English, good on breaking news.
- [high] Theo - t3.gg
  TypeScript + AI dev workflows. Hot takes, narrative-driven.
- [high] Yannic Kilcher
  Paper reviews and deep dives into AI research.
- [low] AI Weirdness
  Janelle Shane: playful AI experiments, image-gen quirks. Low volume, unique perspective.
- [medium] bycloud
  Digestible AI papers: somewhere between Two Minute Papers and Yannic Kilcher.
- [medium] Creative Bloq
  The design industry: where AI encroaches on classic graphic disciplines.
- [medium] Fireship
  100-second format, often AI/LLM + tech news.
- [medium] fxguide
  The VFX and film industry: ever more AI in the pipeline. A professional perspective.
- [medium] Greg Isenberg
  Solo-founder vibe: builds products with AI, podcasts with indie hackers.
- [medium] r/ChatGPTCoding
  Vibe-coding tips, IDE setups, prompts. A mix of all models.
- [medium] r/comfyui
  ComfyUI workflows: custom nodes, JSON workflows, optimizations.
- [medium] r/midjourney
  Midjourney community: v7+ launches, style references, prompt patterns.
- [medium] r/runwayml
  Runway-specific community: feature launches, prompt patterns, comparisons with competitors.
- [medium] r/SunoAI
  Suno music-gen community: new model versions, lyric-prompting techniques. Audio AI has a weak RSS ecosystem.
- [medium] Tina Huang
  AI workflows for data science, practical applications.
- [medium] Two Minute Papers
  Short summaries of AI papers, great for a quick scan.
- [medium] Wes Roth
  AI news with a more clickbait tone; the Gemini filter weeds out the hype.
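Several of the HN sources above are plain point-threshold filters (20 for the niche 3D feed, 100 for the agents and Claude feeds). A minimal sketch of that filtering logic, assuming posts arrive as dicts shaped like Algolia HN API results; the feed keys and sample posts here are hypothetical, and the digest's real pipeline is not shown on this page:

```python
# Per-feed minimum scores, as described in the source list above.
THRESHOLDS = {
    "3d-gaussian-splatting": 20,   # niche category, lower bar
    "ai-agents-mcp": 100,
    "claude-anthropic": 100,
}

def keep(post: dict, feed: str) -> bool:
    """Keep a post only if it clears its feed's point threshold."""
    return post.get("points", 0) >= THRESHOLDS[feed]

posts = [
    {"title": "Show HN: Splat viewer", "points": 45},
    {"title": "NeRF vs splats", "points": 12},
]
print([p["title"] for p in posts if keep(p, "3d-gaussian-splatting")])
```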
Qwen3.6 27B uncensored heretic v2 Native MTP Preserved is Out Now With KLD 0.0021, 6/100 Refusals and the Full 15 MTPs Preserved and Retained, Available in Safetensors, GGUFs and NVFP4s formats.
A high-performance, uncensored 27B model that successfully retains advanced Multi-Token Prediction (MTP) features for better local inference.
LLMFan46 has released 'heretic v2', an uncensored fine-tune of the Qwen3.6 27B model. This release is notable for preserving all 15 native Multi-Token Prediction (MTP) modules, which are frequently lost or degraded during the fine-tuning process. The model achieves a very low Kullback–Leibler divergence (KLD) of 0.0021, suggesting it maintains the original model's reasoning capabilities while eliminating refusals. With a refusal rate of only 6%, it is optimized for unrestricted local use. The model is available in multiple formats including Safetensors, GGUF, and NVFP4 to support various hardware setups.
r/LocalLLaMA·model_release·05/07/2026, 02:59 AM·/u/LLMFan46
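The KLD figure quoted above measures how far the fine-tune's next-token distribution drifts from the base model's: near zero means the uncensoring barely disturbed the original behavior. A minimal sketch of the metric itself, with made-up illustrative distributions rather than the author's actual evaluation harness:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) between two discrete next-token distributions.
    Lower means the fine-tune stays closer to the base model."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Illustrative: base-model vs fine-tuned probabilities over 4 tokens.
base = [0.70, 0.20, 0.05, 0.05]
tuned = [0.69, 0.21, 0.05, 0.05]

print(kl_divergence(tuned, base))  # small positive number: minor drift
```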
Ernie image LoRA training - my take
Practical insights and visual benchmarks for training LoRAs on the Ernie model, highlighting necessary adjustments to standard training workflows.
The author presents their findings and visual results from training a LoRA on the Ernie image model, a less common alternative to the Stable Diffusion ecosystem. The post includes specific technical insights into the training process, highlighting how hyperparameters like learning rate and rank need adjustment compared to standard SDXL workflows. Visual benchmarks provided via Imgur demonstrate the model's proficiency in handling complex architectural details and specific artistic styles. This contribution is particularly valuable for users looking to diversify their toolkit beyond mainstream models and understand the nuances of cross-architecture fine-tuning. It serves as both a technical guide and a proof-of-concept for the Ernie model's flexibility.
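The rank and learning-rate choices the post discusses come down to how few parameters a LoRA actually trains relative to a full fine-tune. A rough sketch of that bookkeeping with hypothetical layer sizes, not the author's Ernie settings:

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """A LoRA replaces a full d_in x d_out weight update with two
    low-rank factors: B (d_out x rank) and A (rank x d_in)."""
    return rank * (d_in + d_out)

full = 4096 * 4096                          # full update of one projection matrix
lora = lora_param_count(4096, 4096, rank=16)
print(full, lora, lora / full)              # LoRA trains well under 1% here
```

Because so few parameters move, LoRA runs typically tolerate (and need) a higher learning rate than full fine-tuning, which is one reason recipes do not transfer unchanged across architectures.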
r/StableDiffusion·tutorial·05/06/2026, 10:53 PM·/u/malcolmrey
Solidity LM surpasses Opus
A new 27B local model specifically fine-tuned for Solidity claims to outperform Claude Opus in smart contract coding benchmarks.
Developer /u/swingbear has released Qwen3.6-Solidity-27B, a fine-tuned model specifically optimized for the Solidity programming language. According to the author, the model achieved a higher pass@1 score on the 'soleval' benchmark compared to Claude Opus 4.7. This 27B parameter model represents a significant achievement for local LLMs in specialized coding tasks, outperforming a much larger frontier model in a niche domain. The project involved substantial compute investment to bridge the gap between general-purpose models and domain-specific tools. The model is currently available on HuggingFace for testing and community feedback.
r/LocalLLaMA·model_release·05/06/2026, 06:59 AM·/u/swingbear
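pass@1 on a coding benchmark is usually estimated from n sampled completions per task with the unbiased estimator pass@k = 1 - C(n-c, k)/C(n, k), where c is the number of passing samples. A minimal sketch with hypothetical numbers; the post does not publish the raw sample counts behind its 'soleval' scores:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k draws (without replacement)
    from n samples lands among the c correct ones."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples per task, 6 passed:
print(pass_at_k(10, 6, 1))  # → 0.6
```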
Amazon brings agentic fine-tuning to SageMaker with support for Llama, Qwen, Deepseek, and Nova
Amazon SageMaker now offers an AI agent to automate and simplify the fine-tuning process for popular open-source models like Llama and Deepseek.
Amazon has updated SageMaker AI to include agentic fine-tuning, a feature designed to streamline the model customization process. This new AI agent assists developers in selecting hyperparameters and managing the training workflow for various LLMs. Supported models include Meta's Llama, Alibaba's Qwen, Deepseek, and Amazon's own Nova series. The goal is to lower the barrier for creating specialized models tailored for specific agentic tasks. By automating complex parts of the fine-tuning pipeline, AWS aims to make high-performance model adaptation more accessible to a broader range of developers.
The Decoder·tooling·05/05/2026, 10:08 AM·Maximilian Schreiner
Relevance auto-scored by LLM (0–10). List shows top 30 from the last 7 days.
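The footer describes the selection rule: each item gets a 0-10 relevance score from an LLM, and the top 30 from the last 7 days are shown. A sketch of that selection step with the scorer stubbed out as a precomputed field; the real prompt, model, and item schema are not part of this page:

```python
from datetime import datetime, timedelta

def top_items(items, now, window_days=7, limit=30):
    """Keep items from the last `window_days`, then take the
    highest-scored `limit` of them."""
    cutoff = now - timedelta(days=window_days)
    recent = [i for i in items if i["published"] >= cutoff]
    return sorted(recent, key=lambda i: i["score"], reverse=True)[:limit]

now = datetime(2026, 5, 7)
items = [
    {"title": "heretic v2", "score": 8.5, "published": datetime(2026, 5, 7)},
    {"title": "old post", "score": 9.9, "published": datetime(2026, 4, 1)},
    {"title": "soleval", "score": 7.0, "published": datetime(2026, 5, 6)},
]
print([i["title"] for i in top_items(items, now)])  # "old post" falls outside the window
```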