AI pulse last 7 days
Daily AI pulse from YouTube, blogs, Reddit, HN. Ruthlessly filtered.
Sources (41)
- critical · Andrej Karpathy
Former AI director at Tesla, OpenAI cofounder. Every video is gold.
- critical · Anthropic
Anthropic's official channel. Every Claude release.
- critical · ComfyUI Blog
Release log for ComfyUI integrations: Luma Uni-1, GPT Image 2, ACE-Step music gen, Seedance. Covers video, image, music, and workflows.
- critical · OpenAI Blog
OpenAI's official blog. All releases.
- critical · Simon Willison's Weblog
The best AI 'thinker'. Daily posts, deep insights, low hype rate.
- high · AI Explained
Deep analysis of papers and benchmarks, low hype rate.
- high · AI Jason
Practical tutorials on Claude Code, MCP, and vibe-coding workflows.
- high · Ben's Bites
Daily AI digest, creator-friendly tone. Codex, model releases, agentic AI.
- high · Cole Medin
Vibe coding + agentic workflows + Claude Code MCP integrations.
- high · Fal AI Blog
Fal hosts most new AI image/video models; their blog gives early signals of upcoming launches.
- high · HN: 3D & Gaussian Splatting
HN signal for generative 3D: Gaussian Splatting, NeRF, image-to-3D. Threshold of 20 points because it is a niche category (historic top: 182 pts).
- high · HN: AI agents / MCP
HN posts about agents, MCP, and vibe coding with at least 100 points.
- high · HN: Claude / Anthropic
HN posts mentioning 'Claude' or 'Anthropic' with at least 100 points.
- high · Hugging Face Blog
Releases of image, video, audio, and 3D models. Partly tech-heavy; Gemini relevance scoring filters out the noise. Downgraded from critical: too much volume for 'must-read' status.
- high · IndyDevDan
Claude Code power user: prompts, hooks.
- high · Interconnects (Nathan Lambert)
AI policy + research analysis. Low hype rate, opinionated.
- high · Latent Space
Swyx's podcast + blog: founder interviews and engineering deep dives.
- high · Matt Wolfe
Comprehensive weekly digest of AI tools. ~700K subs.
- high · Matthew Berman
AI news, model release reviews, agent demos. High output.
- high · r/aivideo
Community AI video: Sora, Veo, Runway, Kling, LTX. What actually surprises creators.
- high · r/ClaudeAI
The Claude community: power users, tips, problems.
- high · r/LocalLLaMA
Open-source LLMs, local inference, hype-free benchmarks.
- high · r/StableDiffusion
The largest open-source image gen community (700k+ users). Model launches, LoRAs, ComfyUI workflows.
- high · Riley Brown
Vibe coding, AI builder workflows, Cursor + Claude tutorials.
- high · The Decoder
German AI news outlet in English, good on breaking news.
- high · Theo - t3.gg
TypeScript + AI dev workflows. Hot takes, narrative-driven.
- high · Yannic Kilcher
Paper reviews and deep dives into AI research.
- low · AI Weirdness
Janelle Shane: playful AI experiments, image gen quirks. Low volume, unique perspective.
- medium · bycloud
AI papers made digestible; somewhere between Two Minute Papers and Yannic Kilcher.
- medium · Creative Bloq
Design industry: where AI is moving into classic graphic design disciplines.
- medium · Fireship
100-second format, often AI/LLM + tech news.
- medium · fxguide
VFX and film industry: more and more AI in the pipeline. A professional perspective.
- medium · Greg Isenberg
Solo-founder vibe: builds products with AI, podcasts with indie hackers.
- medium · r/ChatGPTCoding
Vibe-coding tips, IDE setups, prompts. A mix of all models.
- medium · r/comfyui
ComfyUI workflows: custom nodes, JSON workflows, optimizations.
- medium · r/midjourney
Midjourney community: v7+ launches, style references, prompt patterns.
- medium · r/runwayml
Runway-specific community: feature launches, prompt patterns, comparisons with competitors.
- medium · r/SunoAI
Suno music gen community: new model versions, lyric prompting techniques. Audio AI has a weak RSS ecosystem.
- medium · Tina Huang
AI workflows for data science, practical applications.
- medium · Two Minute Papers
Short summaries of AI papers, great for a quick scan.
- medium · Wes Roth
AI news with a more clickbaity tone; the Gemini filter weeds out the hype.
Is anyone actually getting good results with Flux2.DEV?
If you're struggling to get sharp, realistic images from Flux2.DEV, you're not alone; a user reports consistent issues with hazy outputs and a limited LoRA ecosystem, seeking comm…
A Reddit user on r/StableDiffusion, /u/Extension-Yard1918, has reported persistent issues achieving sharp, realistic images with the Flux2.DEV model over several months of testing. Despite efforts like increasing resolution and step count, and experimenting with different samplers and settings, the generated outputs consistently appear hazy, soft, or foggy, failing to match the quality of models like Z-Image Turbo. The user also notes a weak image editing feature and a nearly nonexistent LoRA ecosystem, questioning if the problem lies with the model's training data, VAE, scheduler, or their own workflow. They are seeking practical advice and specific settings from the community to unlock Flux2.DEV's potential.
r/StableDiffusion·opinion·05/07/2026, 09:15 AM·/u/Extension-Yard1918
LTX 2.3 ComfyUI – Identity drift in Image-to-Video (first/last frame not stable)
LTX 2.3 users are reporting issues with identity drift in Image-to-Video workflows, where the subject's appearance changes between the first and last frames.
Users of the LTX 2.3 video generation model are reporting significant identity drift when using Image-to-Video (I2V) workflows in ComfyUI. The issue manifests as a lack of consistency where the subject's features change noticeably from the initial frame to the end of the sequence. This stability problem affects the professional utility of the model for character-driven content. Community discussions suggest that while LTX 2.3 offers improvements in motion, frame-one conditioning remains a challenge. Creators are currently looking for workflow workarounds or specific node configurations to lock the identity throughout the generation process.
r/comfyui·tooling·05/06/2026, 11:53 AM·/u/White_Dragon_0
Trying to use V2V to extend videos and create long-form in LTX2.3. Quality degrading over time.
Extending videos in LTX-2.3 using V2V workflows often leads to quality degradation after 30 seconds due to recursive referencing and artifact accumulation.
A user on r/comfyui is reporting significant quality loss when attempting to extend 10-second clips into 1-minute videos using the LTX-2.3 model. The process involves using Rune's V2V (Video-to-Video) workflow, which relies on the final 3 seconds of a previous segment to generate the next. By the 30-second mark, which is the third iteration, the visual fidelity begins to break down. This highlights a common 'drift' issue in recursive video generation where artifacts and noise accumulate over time. The discussion points to the limitations of current LTX-2.3 workflows for long-form content without more robust context management or latent refreshing.
r/comfyui·tooling·05/05/2026, 08:10 PM·/u/BarelyAI
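The drift mechanism described above (each new segment is conditioned on the tail of an already-generated segment, so error compounds per hop) can be illustrated with a toy simulation. This is not LTX-2.3 code; the segment length, 3-second overlap, and decay factor are illustrative assumptions modeling the workflow the post describes.

```python
# Toy model of recursive V2V extension. Hypothetical numbers, not LTX-2.3
# internals: each new segment reuses the tail of the *generated* previous
# segment as conditioning, so fidelity decays multiplicatively per hop.

SEGMENT_SECONDS = 10.0   # length of each generated clip (as in the post)
OVERLAP_SECONDS = 3.0    # tail reused as conditioning for the next clip
DECAY_PER_HOP = 0.85     # assumed fidelity retained per re-conditioning step

def extend_video(target_seconds: float) -> list[tuple[float, float]]:
    """Return (start_time, fidelity) for each segment of a recursive extension."""
    segments = []
    t, fidelity = 0.0, 1.0
    while t < target_seconds:
        segments.append((t, fidelity))
        # Next segment starts where the reused tail begins; it inherits
        # conditioning from generated frames, not ground truth.
        t += SEGMENT_SECONDS - OVERLAP_SECONDS
        fidelity *= DECAY_PER_HOP
    return segments

for start, q in extend_video(60):
    print(f"t={start:4.0f}s  fidelity={q:.2f}")
```

With these assumed values, fidelity falls below ~0.62 by the fourth segment (the third extension hop, covering roughly the 30-second mark), which matches the pattern the poster reports. Real fixes would need ground-truth re-anchoring or latent refreshing rather than purely recursive conditioning.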
Relevance auto-scored by LLM (0–10). List shows top 30 from the last 7 days.