AI pulse: last 7 days
Daily AI pulse from YouTube, blogs, Reddit, HN. Ruthlessly filtered.
Sources (41)
- critical: Andrej Karpathy
Former Tesla AI director, OpenAI cofounder. Every video is gold.
- critical: Anthropic
Anthropic's official channel. Every Claude release.
- critical: ComfyUI Blog
Release log for ComfyUI integrations: Luma Uni-1, GPT Image 2, ACE-Step music gen, Seedance. Covers video + image + music + workflows.
- critical: OpenAI Blog
OpenAI's official blog. All releases.
- critical: Simon Willison's Weblog
The best AI 'thinker'. Daily posts, deep insights, low hype rate.
- high: AI Explained
Deep analysis of papers and benchmarks, low hype rate.
- high: AI Jason
Practical tutorials on Claude Code, MCP, and vibe-coding workflows.
- high: Ben's Bites
Daily AI digest with a creator-friendly tone. Codex, model releases, agentic AI.
- high: Cole Medin
Vibe coding + agentic workflows + Claude Code MCP integrations.
- high: Fal AI Blog
Fal hosts most new AI image/video models; their blog gives early signals of upcoming launches.
- high: HN: 3D & Gaussian Splatting
HN signal for generative 3D: Gaussian Splatting, NeRF, image-to-3D. Threshold of 20 points because it's a niche category (historic top: 182 pts).
- high: HN: AI agents / MCP
HN posts about agents, MCP, and vibe coding with a 100-point minimum.
- high: HN: Claude / Anthropic
HN posts mentioning 'Claude' or 'Anthropic' with a 100-point minimum.
- high: Hugging Face Blog
Releases for image, video, audio, and 3D models. Partly tech-heavy; Gemini relevance scoring filters out the noise. Downgraded from critical: too much volume for 'must-read' status.
- high: IndyDevDan
Claude Code power user: prompts, hooks.
- high: Interconnects (Nathan Lambert)
AI policy + research analysis. Low hype rate, opinionated.
- high: Latent Space
Swyx's podcast + blog: founder interviews and engineering deep dives.
- high: Matt Wolfe
Comprehensive weekly digest of AI tools. ~700K subs.
- high: Matthew Berman
AI news, model release reviews, agent demos. High output.
- high: r/aivideo
AI video community: Sora, Veo, Runway, Kling, LTX. What genuinely surprises creators.
- high: r/ClaudeAI
The Claude community: power users, tips, problems.
- high: r/LocalLLaMA
Open-source LLMs, local inference, benchmarks without hype.
- high: r/StableDiffusion
The largest open-source image-gen community (700k+ users). Model launches, LoRAs, ComfyUI workflows.
- high: Riley Brown
Vibe coding, AI builder workflows, Cursor + Claude tutorials.
- high: The Decoder
German AI news outlet in English, good for breaking news.
- high: Theo - t3.gg
TypeScript + AI dev workflows. Hot takes, narrative-driven.
- high: Yannic Kilcher
Paper reviews and deep dives into AI research.
- low: AI Weirdness
Janelle Shane: playful AI experiments, image-gen quirks. Low volume, unique perspective.
- medium: bycloud
AI papers made digestible: somewhere between Two Minute Papers and Yannic Kilcher.
- medium: Creative Bloq
Design industry: where AI is encroaching on the classic graphic disciplines.
- medium: Fireship
100-second format, often AI/LLM + tech news.
- medium: fxguide
VFX and film industry, with more and more AI in the pipeline. A professional perspective.
- medium: Greg Isenberg
Solo-founder vibe: builds products with AI, podcasts with indie hackers.
- medium: r/ChatGPTCoding
Vibe-coding tips, IDE setups, prompts. A mix of all models.
- medium: r/comfyui
ComfyUI workflows: custom nodes, JSON workflows, optimizations.
- medium: r/midjourney
Midjourney community: v7+ launches, style references, prompt patterns.
- medium: r/runwayml
Runway-specific community: feature launches, prompt patterns, comparisons with competitors.
- medium: r/SunoAI
Suno music-gen community: new model versions, lyric-prompting techniques. Audio AI has a weak RSS ecosystem.
- medium: Tina Huang
AI workflows for data science, practical applications.
- medium: Two Minute Papers
Short summaries of AI papers, great for a quick scan.
- medium: Wes Roth
AI news with a more clickbaity tone; the Gemini filter weeds out the hype.

Analysis of the 100 most popular hardware setups on Hugging Face
See which GPUs actually dominate the AI landscape, from enterprise A100s to the consumer RTX 4090s favored for local LLM execution.
Hugging Face CEO Clement Delangue released an analysis of the top 100 hardware configurations used on the platform. The data underscores NVIDIA's market capture, with the A100 and H100 leading for heavy workloads, while the RTX 3090 and 4090 remain the top choices for local enthusiasts. This report offers a factual look at the compute landscape, moving beyond hype to show what hardware is actually accessible to developers. It highlights the importance of VRAM capacity for running modern LLMs locally. For the creative-tech community, this serves as a benchmark for building and optimizing tools that fit the most common user profiles.
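Why VRAM capacity is the deciding factor can be seen with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal sketch (the 70B example and the overhead caveat are illustrative assumptions, not figures from the report):

```python
# Rough VRAM estimate for loading model weights locally.
# Assumption: weights dominate memory; KV cache and activations
# add extra overhead on top of this figure.
def weight_vram_gb(params_billion, bits_per_param):
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1024**3  # convert bytes to GiB

# Hypothetical 70B model at 4-bit quantization vs fp16:
print(round(weight_vram_gb(70, 4), 1))   # ~32.6 GiB -> roughly two 24 GB RTX 3090/4090s
print(round(weight_vram_gb(70, 16), 1))  # ~130.4 GiB -> A100/H100 territory
```

The gap between those two numbers is why quantization, and consumer cards with large VRAM, dominate the local-LLM end of the hardware distribution.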
r/LocalLLaMA·news·05/06/2026, 04:35 PM·/u/clem59480

Adding Benchmaxxer Repellant to the Open ASR Leaderboard
Hugging Face is cleaning up the Open ASR Leaderboard by using private test data to stop models from 'cheating' their way to the top.
Hugging Face has updated the Open ASR Leaderboard with a mechanism dubbed "Benchmaxxer Repellant" to combat benchmark gaming. The initiative addresses the growing issue of data contamination, where models are inadvertently or intentionally trained on public test sets. By introducing private, unseen evaluation datasets, the leaderboard can now provide a more accurate reflection of a model's generalization capabilities. This move ensures that top-ranking models actually perform better in real-world scenarios rather than just excelling at memorized benchmarks. It represents a shift towards more rigorous, verifiable standards in the open-source speech recognition community.
Hugging Face Blog·tooling·05/06/2026, 12:00 AM
Gemma 4 MTP released
Get up to 2x faster inference on Gemma 4 models using the newly released Multi-Token Prediction draft checkpoints for speculative decoding.
Google has officially released Multi-Token Prediction (MTP) draft models for the Gemma 4 family, including variants for the 31B and smaller models. These draft models are designed for Speculative Decoding, where a smaller model predicts multiple future tokens that the main model then validates in parallel. This technique can achieve up to 2x speedups in generation latency while maintaining identical output quality compared to standard autoregressive generation. The release includes specialized checkpoints on Hugging Face tuned as assistants for the main Gemma 4 weights. This is a significant update for local LLM users and on-device applications where inference speed is often the primary bottleneck.
r/LocalLLaMA·model_release·05/05/2026, 04:01 PM·rerri
Relevance auto-scored by LLM (0–10). List shows top 30 from the last 7 days.