AI pulse: last 7 days
Daily AI pulse from YouTube, blogs, Reddit, HN. Ruthlessly filtered.
Sources (41)
- [critical] Andrej Karpathy
Former Tesla AI director, OpenAI cofounder. Every video is gold.
- [critical] Anthropic
Official Anthropic channel. Every Claude release.
- [critical] ComfyUI Blog
Release log for ComfyUI integrations: Luma Uni-1, GPT Image 2, ACE-Step music gen, Seedance. Covers video + image + music + workflow.
- [critical] OpenAI Blog
Official OpenAI blog. All releases.
- [critical] Simon Willison's Weblog
The best AI 'thinker'. Daily posts, deep insights, low hype rate.
- [high] AI Explained
Deep analysis of papers and benchmarks, low hype rate.
- [high] AI Jason
Practical tutorials: Claude Code, MCP, vibe coding workflows.
- [high] Ben's Bites
Daily AI digest, creator-friendly tone. Codex, model releases, agentic AI.
- [high] Cole Medin
Vibe coding + agentic workflows + Claude Code MCP integrations.
- [high] Fal AI Blog
Fal hosts most new AI image/video models; their blog gives early signals of launches.
- [high] HN: 3D & Gaussian Splatting
HN signal for generative 3D: Gaussian Splatting, NeRF, image-to-3D. Threshold of 20 points because the category is niche (historic top: 182 pts).
- [high] HN: AI agents / MCP
HN posts about agents, MCP, and vibe coding with a minimum of 100 points.
- [high] HN: Claude / Anthropic
HN posts mentioning 'Claude' or 'Anthropic' with a minimum of 100 points.
- [high] Hugging Face Blog
Releases of image, video, audio, and 3D models. Partly tech-heavy; Gemini relevance scoring filters out the noise. Downgraded from critical: too much volume for 'must-read' status.
- [high] IndyDevDan
Claude Code power user, prompts, hooks.
- [high] Interconnects (Nathan Lambert)
AI policy + research analysis. Low hype rate, opinionated.
- [high] Latent Space
Swyx's podcast + blog: founder interviews and engineering deep dives.
- [high] Matt Wolfe
Comprehensive weekly digest of AI tools. ~700K subs.
- [high] Matthew Berman
AI news, model release reviews, agent demos. High output.
- [high] r/aivideo
AI video community: Sora, Veo, Runway, Kling, LTX. What genuinely surprises creators.
- [high] r/ClaudeAI
Claude community: power users, tips, problems.
- [high] r/LocalLLaMA
Open-source LLMs, running models locally, benchmarks without the hype.
- [high] r/StableDiffusion
Largest open-source image gen community (700k+ users). Model releases, LoRAs, ComfyUI workflows.
- [high] Riley Brown
Vibe coding, AI builder workflows, Cursor + Claude tutorials.
- [high] The Decoder
German AI news outlet publishing in English, good breaking news.
- [high] Theo - t3.gg
TypeScript + AI dev workflows. Hot takes, narrative-driven.
- [high] Yannic Kilcher
Paper reviews and deep dives into AI research.
- [low] AI Weirdness
Janelle Shane: playful AI experiments, image gen quirks. Low volume, unique perspective.
- [medium] bycloud
AI papers made digestible: somewhere between Two Minute Papers and Yannic Kilcher.
- [medium] Creative Bloq
Design industry: where AI is encroaching on classic graphic disciplines.
- [medium] Fireship
100-second format, often AI/LLM + tech news.
- [medium] fxguide
VFX and film industry: ever more AI in the pipeline. A professional perspective.
- [medium] Greg Isenberg
Solo-founder vibe: builds products with AI, podcasts with indie hackers.
- [medium] r/ChatGPTCoding
Vibe coding tips, IDE setups, prompts. A mix of all models.
- [medium] r/comfyui
ComfyUI workflows: custom nodes, JSON workflows, optimizations.
- [medium] r/midjourney
Midjourney community: v7+ releases, style references, prompt patterns.
- [medium] r/runwayml
Runway-specific community: feature launches, prompt patterns, comparisons with competitors.
- [medium] r/SunoAI
Suno music gen community: new model versions, lyric prompting techniques. Audio AI has a weak RSS ecosystem.
- [medium] Tina Huang
AI workflows for data science, practical applications.
- [medium] Two Minute Papers
Short summaries of AI papers, great for a quick scan.
- [medium] Wes Roth
AI news with a more clickbaity tone; the Gemini filter weeds out the hype.
Decoupled Attention from Weights - Gemma 4 26B
Run massive models like Gemma 4 26B by splitting attention and weights across multiple cheap local machines, bypassing single-GPU VRAM limits.
Larql introduces a method to decouple attention mechanisms from model weights, specifically demonstrated with Gemma 4 26B. This approach allows users to split the memory load across multiple local machines, keeping the attention mechanism on a primary device while offloading the massive weight matrices to a secondary, cheaper server like an old Xeon. This effectively bypasses the VRAM bottleneck that typically limits local LLM performance and model size. The repository includes functional code to implement this distributed inference strategy. It represents a significant shift for home lab enthusiasts who want to run large-scale models without investing in high-end enterprise GPUs.
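The split described above can be sketched in miniature. The snippet below is a toy illustration of the idea only, not Larql's code (which lives in the linked repository): projection weights sit on a simulated remote server, while the primary device ships activations across the boundary and keeps the attention math local.

```python
import math

def matmul(a, b):
    # Plain-Python matrix product: a (rows) @ b (rows).
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

class RemoteWeightServer:
    """Simulates the cheap secondary machine holding the big matrices."""
    def __init__(self, w_q, w_k, w_v):
        self.weights = {"q": w_q, "k": w_k, "v": w_v}

    def project(self, name, activations):
        # Only activations travel over the wire; weights never leave.
        return matmul(activations, self.weights[name])

def local_attention(server, x):
    """Attention math stays on the primary device."""
    q = server.project("q", x)
    k = server.project("k", x)
    v = server.project("v", x)
    d = len(q[0])
    scores = matmul(q, [list(col) for col in zip(*k)])  # q @ k^T
    out = []
    for row in scores:
        scaled = [s / math.sqrt(d) for s in row]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]       # stable softmax
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j][i] for j, w in enumerate(weights))
                    for i in range(len(v[0]))])
    return out

# Two tokens, 2-dim embeddings, identity projections for readability.
eye = [[1.0, 0.0], [0.0, 1.0]]
server = RemoteWeightServer(eye, eye, eye)
print(local_attention(server, eye))
```

The point of the split is visible in `project`: the weight matrices, which dominate a large model's memory footprint, never cross the boundary; only the much smaller activation tensors do.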
r/LocalLLaMA·tooling·05/06/2026, 11:56 AM·/u/yeah-ok
Heretic 1.3 released: Reproducible models, integrated benchmarking system, reduced peak VRAM usage, broader model support, and more
Heretic 1.3 brings byte-for-byte reproducibility to model abliteration, integrated benchmarking, and lower VRAM requirements for processing large models like Qwen 3.5.
Heretic 1.3, the leading tool for LLM abliteration (decensoring), introduces several major technical updates focused on transparency and efficiency. The headline feature is a reproducibility system that allows users to generate byte-for-byte identical models by capturing environment metadata, including GPU drivers and library versions. A new integrated benchmarking suite based on lm-evaluation-harness enables running MMLU and GSM8K tests directly within the tool to verify model quality. Additionally, peak VRAM usage has been significantly reduced, and support has been expanded to include latest-generation architectures like Qwen 3.5 and Gemma 4. This release solidifies Heretic's position as a professional-grade utility for the local LLM community.
r/LocalLLaMA·tooling·05/05/2026, 02:57 PM·-p-e-w-
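The reproducibility idea, snapshotting everything about the environment that could change the output model, can be sketched as below. This is a hypothetical snapshot format for illustration, not Heretic's actual schema.

```python
import hashlib
import json
import platform
import sys

def environment_fingerprint(extra=None):
    """Capture environment metadata and hash it into a single fingerprint.

    Two runs with the same fingerprint should be re-creatable
    byte-for-byte; any drift in the environment changes the hash.
    """
    meta = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "machine": platform.machine(),
    }
    # A real tool would also record GPU driver, CUDA, and library
    # versions here; the caller supplies them as placeholders.
    meta.update(extra or {})
    digest = hashlib.sha256(
        json.dumps(meta, sort_keys=True).encode()).hexdigest()
    return meta, digest

meta, digest = environment_fingerprint({"gpu_driver": "unknown"})
print(digest[:16])
```

Sorting the keys before hashing matters: JSON key order is otherwise unspecified, and an unstable serialization would break the byte-for-byte guarantee.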
LTX2.3 8GB VRAM WorkFlow
Run the LTX2.3 video model on budget GPUs (8GB VRAM) using this optimized, multi-step ComfyUI workflow.
This Reddit post introduces a specialized ComfyUI workflow designed to run the LTX2.3 video generation model on GPUs with only 8GB of VRAM, such as the RTX 3060 Ti. Traditionally, high-end video models require significant hardware resources, but this optimization makes the technology accessible to hobbyists. The workflow achieves stability by generating initial video at a lower resolution at 24fps, then handling upscaling and frame interpolation as separate, decoupled steps. It supports both Text-to-Video and Image-to-Video modes, with the latter recommended for maintaining character consistency. This release provides a practical starting point for creative users who want to experiment with state-of-the-art video AI without expensive hardware upgrades.
r/StableDiffusion·tooling·05/05/2026, 12:46 PM·/u/Extension-Yard1918
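The decoupling described above can be illustrated with a toy pipeline (stand-in functions, not the actual ComfyUI nodes): a cheap low-resolution pass first, then upscaling and frame interpolation as independent passes, so no single stage needs the full-resolution video and the model in memory at once.

```python
def generate_low_res(num_frames, size):
    # Stand-in for the sampler: a moving gradient per frame.
    return [[[(x + y + t) % 256 for x in range(size)]
             for y in range(size)]
            for t in range(num_frames)]

def upscale(frame, factor):
    # Nearest-neighbour upscale; a real pass would use a learned upscaler.
    return [[px for px in row for _ in range(factor)]
            for row in frame for _ in range(factor)]

def interpolate(frames):
    # Insert an averaged frame between each pair (simple blend, not a
    # learned interpolator like RIFE).
    out = [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        mid = [[(a + b) // 2 for a, b in zip(r1, r2)]
               for r1, r2 in zip(prev, nxt)]
        out += [mid, nxt]
    return out

low = generate_low_res(num_frames=5, size=4)   # cheap generation pass
hi = [upscale(f, factor=2) for f in low]       # separate upscale pass
final = interpolate(hi)                        # separate interpolation pass
print(len(final), len(final[0]), len(final[0][0]))  # 9 8 8
```

Because each stage is a separate pass over intermediate results, peak VRAM is bounded by the heaviest single stage rather than by the whole pipeline, which is the trick that fits 8GB cards.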
My LTX 2.3 LoRA Training Journey: Fighting for VRAM even with a 5090
Training LTX 2.3 LoRAs on 32GB VRAM is viable by disabling audio modules and using official scripts, with results generalizing well to high-res video.
A detailed technical report on training a LoRA for the LTX 2.3 video model using an RTX 5090. The author highlights that AI-Toolkit proved unstable, leading them to use official training scripts refined with the help of Claude. To fit the training within 32GB of VRAM, it was mandatory to disable the audio module and limit resolution to 512x512 at 49 frames. Performance metrics showed 0.58 steps per second, with 1500 steps completed in 40 minutes. The resulting LoRA successfully captured specific 2D animation motion patterns and generalized well to higher resolutions and 121-frame sequences during inference.
r/StableDiffusion·tutorial·05/05/2026, 10:22 AM·/u/ovpresentme
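As a quick sanity check, the reported throughput and step count are roughly consistent with the quoted wall-clock time:

```python
# 1500 steps at the reported 0.58 steps/s.
steps = 1500
steps_per_sec = 0.58
minutes = steps / steps_per_sec / 60
print(round(minutes, 1))  # ~43.1 min, close to the reported ~40 min
```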
LTX-2.3 + Union Control LoRA (8GB VRAM)
Generate high-quality 1280x640 LTX-2.3 videos with precise control on an 8GB VRAM GPU using this optimized ComfyUI workflow.
A new ComfyUI workflow demonstrates high-resolution video generation (1280x640) using the LTX-2.3 model on consumer-grade hardware with only 8GB of VRAM. By integrating the Union Control LoRA, users can achieve precise structural control over the video output, which was previously difficult on low-memory GPUs. The author provides a complete package including a Hugging Face repository for the workflow and a step-by-step YouTube tutorial. This release is significant for the creative community as it lowers the barrier to entry for high-quality AI cinematography. The pipeline uses Nano Banana for the initial frame generation before passing it to LTX-2.3 for temporal consistency.
r/comfyui·tooling·05/05/2026, 02:14 AM·/u/big-boss_97
Relevance auto-scored by LLM (0–10). List shows top 30 from the last 7 days.