AI Pulse: last 7 days
Daily AI pulse from YouTube, blogs, Reddit, HN. Ruthlessly filtered.
Sources (41)
- critical · Andrej Karpathy
Former Director of AI at Tesla, OpenAI cofounder. Every video is gold.
- critical · Anthropic
Official Anthropic channel. Every Claude release.
- critical · ComfyUI Blog
Release log for ComfyUI integrations: Luma Uni-1, GPT Image 2, ACE-Step music gen, Seedance. Covers video + image + music + workflow.
- critical · OpenAI Blog
Official OpenAI blog. All releases.
- critical · Simon Willison's Weblog
The best AI 'thinker'. Daily posts, deep insights, low hype rate.
- high · AI Explained
Deep analysis of papers and benchmarks, low hype rate.
- high · AI Jason
Practical tutorials on Claude Code, MCP, and vibe-coding workflows.
- high · Ben's Bites
Daily AI digest, creator-friendly tone. Codex, model releases, agentic AI.
- high · Cole Medin
Vibe coding + agentic workflows + Claude Code MCP integrations.
- high · Fal AI Blog
Fal hosts most new AI image/video models; their blog is an early signal of launches.
- high · HN: 3D & Gaussian Splatting
HN signal for generative 3D: Gaussian Splatting, NeRF, image-to-3D. Threshold of 20 points because the category is niche (historic top: 182 pts).
- high · HN: AI agents / MCP
HN posts about agents, MCP, and vibe coding with a minimum of 100 points.
- high · HN: Claude / Anthropic
HN posts mentioning 'Claude' or 'Anthropic' with a minimum of 100 points.
- high · Hugging Face Blog
Releases of image, video, audio, and 3D models. Partly tech-heavy; Gemini relevance scoring filters out the noise. Downgraded from critical: too much volume for 'must-read' status.
- high · IndyDevDan
Claude Code power user, prompts, hooks.
- high · Interconnects (Nathan Lambert)
AI policy + research analysis. Low hype rate, opinionated.
- high · Latent Space
Swyx's podcast + blog: founder interviews and engineering deep dives.
- high · Matt Wolfe
Comprehensive weekly AI tools digest. ~700K subs.
- high · Matthew Berman
AI news, model release reviews, agent demos. High output.
- high · r/aivideo
AI video community: Sora, Veo, Runway, Kling, LTX. What genuinely surprises creators.
- high · r/ClaudeAI
The Claude community: power users, tips, problems.
- high · r/LocalLLaMA
Open-source LLMs, local inference, hype-free benchmarks.
- high · r/StableDiffusion
The largest open-source image-gen community (700k+ users). Model launches, LoRAs, ComfyUI workflows.
- high · Riley Brown
Vibe coding, AI builder workflows, Cursor + Claude tutorials.
- high · The Decoder
German AI news outlet publishing in English, good breaking news.
- high · Theo - t3.gg
TypeScript + AI dev workflows. Hot takes, narrative-driven.
- high · Yannic Kilcher
Paper reviews and deep dives into AI research.
- low · AI Weirdness
Janelle Shane: playful AI experiments, image-gen quirks. Low volume, unique perspective.
- medium · bycloud
Digestible AI papers: somewhere between Two Minute Papers and Yannic Kilcher.
- medium · Creative Bloq
The design industry: where AI is moving into classic graphic disciplines.
- medium · Fireship
100-second format, often AI/LLM + tech news.
- medium · fxguide
The VFX and film industry: more and more AI in the pipeline. A professional perspective.
- medium · Greg Isenberg
Solo-founder vibe: builds products with AI, podcasts with indie hackers.
- medium · r/ChatGPTCoding
Vibe-coding tips, IDE setups, prompts. A mix of all models.
- medium · r/comfyui
ComfyUI workflows: custom nodes, JSON workflows, optimizations.
- medium · r/midjourney
The Midjourney community: v7+ launches, style references, prompt patterns.
- medium · r/runwayml
Runway-specific community: feature launches, prompt patterns, comparisons with competitors.
- medium · r/SunoAI
Suno music-gen community: new model versions, lyric prompting techniques. Audio AI has a weak RSS ecosystem.
- medium · Tina Huang
AI workflows for data science, practical applications.
- medium · Two Minute Papers
Short summaries of AI papers, great for a quick scan.
- medium · Wes Roth
AI news with a more clickbaity tone; the Gemini filter weeds out the hype.

DeepSeek nears $45bn valuation as China’s ‘Big Fund’ leads investment talks
DeepSeek is securing $45B in funding, ensuring they remain a dominant force in the open-weights LLM space for the foreseeable future.
DeepSeek, the developer of the highly efficient V3 and R1 models, is reportedly in talks for its first major investment round that could value the company at $45 billion. The funding is expected to be led by China’s National Integrated Circuit Industry Investment Fund, known as the 'Big Fund.' This move marks a significant shift as DeepSeek, previously funded by high-frequency trading firm High-Flyer Quant, seeks massive capital to scale its compute resources. The valuation would place DeepSeek among the world's most valuable AI startups, rivaling US-based giants like Anthropic. For the local LLM community, this suggests a long-term commitment to developing state-of-the-art models that often challenge proprietary alternatives.
r/LocalLLaMA·news·05/07/2026, 10:21 AM·/u/Nunki08
DeepSeek V4 AI Beats Billion Dollar Systems…For Free
DeepSeek V4 is a powerful new open-source AI model that reportedly outperforms expensive commercial systems, offering advanced capabilities for free.
DeepSeek has released its new AI model, DeepSeek V4, which is being highlighted for its impressive performance. The model reportedly surpasses the capabilities of much larger and more expensive "billion-dollar" proprietary systems, yet it is available for free. This release signifies a notable advancement in the open-source LLM landscape, potentially democratizing access to high-tier AI capabilities. For creative non-developers and hobbyists, this means access to a powerful tool without significant financial investment, pushing the boundaries of what's achievable with freely available AI.
Two Minute Papers·model_release·05/06/2026, 04:07 PM·Two Minute Papers

DeepSeek nears $45 billion valuation as China's state chip fund leads round
DeepSeek is securing $45B in state-backed funding, solidifying its position as the primary global rival to OpenAI and Anthropic.
DeepSeek is reportedly finalizing a funding round that would value the Chinese AI lab at approximately $45 billion. The round is led by China's state-backed semiconductor fund, indicating strong government support for domestic AI development. This massive valuation leap follows the global success of their DeepSeek-V3 and R1 models, which demonstrated high efficiency at significantly lower costs than Western counterparts. The investment highlights the intensifying AI arms race between China and the US, specifically focusing on compute and model training capabilities. This capital injection will likely fuel further research into large-scale reasoning models and infrastructure to bypass hardware restrictions.
The Decoder·news·05/06/2026, 01:22 PM·Maximilian Schreiner
DeepSeek V4 being 17x cheaper got me to actually measure what I send to cloud vs what I could run locally. the results are stupid.
Most coding tasks don't need expensive cloud models; routing simple tasks to a local LLM can cut your API bill by 75% without losing quality.
A developer conducted a 10-day experiment comparing a local Qwen 3.6 27b model (running on an RTX 3090) against frontier cloud models like GPT-5.2. The analysis revealed that 65% of daily coding tasks, such as project scanning and boilerplate generation, performed identically on local hardware. For debugging with multi-file context, local models reached 61% accuracy, while complex architecture decisions still required cloud intervention, representing only 15% of total tasks. By implementing a task-routing strategy, the author reduced their monthly API costs from $85 to $22. This case study highlights that the massive price gap between local and cloud models often doesn't justify the performance difference for routine work.
r/LocalLLaMA·tooling·05/05/2026, 08:55 PM·/u/spencer_kw
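The routing strategy from the post above can be sketched in a few lines. This is an illustrative reconstruction, not the author's code: the task categories, cost figure, and function names are assumptions chosen to mirror the described split (roughly 65% local, 15% cloud-only, the rest hybrid).

```python
# Illustrative sketch of the local/cloud task-routing idea; all task
# names and the per-task cost are hypothetical, not from the post.

LOCAL_TASKS = {"scan", "boilerplate", "rename", "docstring"}   # ~65% of daily work
HYBRID_TASKS = {"debug"}                                        # try local, escalate if needed
# Everything else (architecture decisions, large refactors) goes to cloud (~15%).

def route(task_type: str) -> str:
    """Pick a backend for a coding task based on its complexity class."""
    if task_type in LOCAL_TASKS:
        return "local"            # e.g. a Qwen-class model on an RTX 3090
    if task_type in HYBRID_TASKS:
        return "local_then_cloud" # local first, cloud fallback on failure
    return "cloud"                # frontier model for high-stakes decisions

def estimated_monthly_cost(task_counts: dict[str, int],
                           cloud_cost_per_task: float = 0.05) -> float:
    """Rough API spend if only 'cloud'-routed tasks hit the paid API."""
    billable = sum(n for t, n in task_counts.items() if route(t) == "cloud")
    return billable * cloud_cost_per_task
```

With a classifier like this in front of an LLM client, the billable share shrinks to the cloud-only bucket, which is the mechanism behind the $85 to $22 drop the author reports.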
Amazon brings agentic fine-tuning to SageMaker with support for Llama, Qwen, Deepseek, and Nova
Amazon SageMaker now offers an AI agent to automate and simplify the fine-tuning process for popular open-source models like Llama and Deepseek.
Amazon has updated SageMaker AI to include agentic fine-tuning, a feature designed to streamline the model customization process. This new AI agent assists developers in selecting hyperparameters and managing the training workflow for various LLMs. Supported models include Meta's Llama, Alibaba's Qwen, Deepseek, and Amazon's own Nova series. The goal is to lower the barrier for creating specialized models tailored for specific agentic tasks. By automating complex parts of the fine-tuning pipeline, AWS aims to make high-performance model adaptation more accessible to a broader range of developers.
The Decoder·tooling·05/05/2026, 10:08 AM·Maximilian Schreiner
DeepSeek V4 Pro matches GPT-5.2 on FoodTruck Bench, our agentic benchmark — 10 weeks later, ~17× cheaper
DeepSeek V4 Pro offers GPT-5.2 level agentic performance at 1/17th the cost, narrowing the US-China AI gap to just 10 weeks.
DeepSeek V4 Pro has achieved performance parity with GPT-5.2 on the FoodTruck Bench, a complex 30-day agentic simulation involving 34 tools and persistent memory. While GPT-5.2 was tested in February, DeepSeek matched its results only ten weeks later, signaling a rapid closing of the gap between US and Chinese frontier models. Crucially, DeepSeek is approximately 17 times cheaper for agentic workloads, with significantly lower input/output pricing. The model also demonstrated superior consistency compared to Grok 4.3, showing lower variance in outcomes and better resource management. Additionally, Xiaomi’s MiMo v2.5 Pro also entered the top 6, further establishing Chinese models as high-value competitors in the frontier tier.
r/LocalLLaMA·model_release·05/05/2026, 06:51 AM·/u/Disastrous_Theme5906
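The "~17x cheaper" claim above can be sanity-checked with simple arithmetic from the one price the post quotes ($0.435/M input tokens). A minimal sketch; the implied GPT-5.2 input price below is derived from the ratio, not stated independently:

```python
# Back-of-the-envelope check of the ~17x price gap. Only the DeepSeek
# input price is quoted in the post; the GPT-5.2 figure is implied.

DEEPSEEK_INPUT_PER_M = 0.435       # $ per million input tokens (quoted)
RATIO = 17

gpt52_input_per_m = DEEPSEEK_INPUT_PER_M * RATIO  # implied ~$7.40/M

def run_cost(input_tokens_millions: float, price_per_m: float) -> float:
    """Input-side cost of an agentic run at a given per-million price."""
    return input_tokens_millions * price_per_m

# A 100M-token agentic workload: ~$43.50 on DeepSeek vs ~$739.50 implied
# on GPT-5.2, which is why the cost argument dominates the benchmark story.
```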
Relevance auto-scored by LLM (0–10). List shows top 30 from the last 7 days.