AI pulse last 7 days
Daily AI pulse from YouTube, blogs, Reddit, HN. Ruthlessly filtered.
Sources (41)
- critical: Andrej Karpathy
Former Tesla AI director, OpenAI cofounder. Every video is gold.
- critical: Anthropic
Official Anthropic channel. Every Claude release.
- critical: ComfyUI Blog
Release log for ComfyUI integrations: Luma Uni-1, GPT Image 2, ACE-Step music gen, Seedance. Covers video, image, music, and workflows.
- critical: OpenAI Blog
Official OpenAI blog. All releases.
- critical: Simon Willison's Weblog
The best AI 'thinker'. Daily posts, deep insights, low hype rate.
- high: AI Explained
Deep analysis of papers and benchmarks, low hype rate.
- high: AI Jason
Practical tutorials on Claude Code, MCP, and vibe coding workflows.
- high: Ben's Bites
Daily AI digest, creator-friendly tone. Codex, model releases, agentic AI.
- high: Cole Medin
Vibe coding + agentic workflows + Claude Code MCP integrations.
- high: Fal AI Blog
Fal hosts most new AI image/video models; their blog is an early signal of launches.
- high: HN: 3D & Gaussian Splatting
HN signal for generative 3D: Gaussian Splatting, NeRF, image-to-3D. Threshold of 20 points because it is a niche category (historic top post: 182 pts).
- high: HN: AI agents / MCP
HN posts about agents, MCP, and vibe coding with at least 100 points.
- high: HN: Claude / Anthropic
HN posts mentioning 'Claude' or 'Anthropic' with at least 100 points.
- high: Hugging Face Blog
Releases for image, video, audio, and 3D models. Partly tech-heavy; Gemini relevance scoring filters out the noise. Downgraded from critical: too much volume for 'must-read' status.
- high: IndyDevDan
Claude Code power user, prompts, hooks.
- high: Interconnects (Nathan Lambert)
AI policy + research analysis. Low hype rate, opinionated.
- high: Latent Space
Swyx's podcast + blog: founder interviews and engineering deep dives.
- high: Matt Wolfe
Comprehensive weekly digest of AI tools. ~700K subs.
- high: Matthew Berman
AI news, model release reviews, agent demos. High output.
- high: r/aivideo
AI video community: Sora, Veo, Runway, Kling, LTX. What genuinely impresses creators.
- high: r/ClaudeAI
Claude community: power users, tips, issues.
- high: r/LocalLLaMA
Open-source LLMs, local inference, benchmarks without the hype.
- high: r/StableDiffusion
Largest open-source image gen community (700k+ users). Model launches, LoRAs, ComfyUI workflows.
- high: Riley Brown
Vibe coding, AI builder workflows, Cursor + Claude tutorials.
- high: The Decoder
German AI news outlet publishing in English, good breaking news coverage.
- high: Theo - t3.gg
TypeScript + AI dev workflows. Hot takes, narrative-driven.
- high: Yannic Kilcher
Paper reviews and deep dives into AI research.
- low: AI Weirdness
Janelle Shane: playful AI experiments, image gen quirks. Low volume, unique perspective.
- medium: bycloud
AI papers made digestible: somewhere between Two Minute Papers and Yannic Kilcher.
- medium: Creative Bloq
Design industry: where AI intersects with classic graphic disciplines.
- medium: Fireship
100-second format, often AI/LLM + tech news.
- medium: fxguide
VFX and film industry: ever more AI in the pipeline. A professional perspective.
- medium: Greg Isenberg
Solo-founder vibe: builds products with AI, podcasts with indie hackers.
- medium: r/ChatGPTCoding
Vibe coding tips, IDE setups, prompts. A mix of all models.
- medium: r/comfyui
ComfyUI workflows: custom nodes, JSON workflows, optimizations.
- medium: r/midjourney
Midjourney community: v7+ launches, style references, prompt patterns.
- medium: r/runwayml
Runway-specific community: feature launches, prompt patterns, comparisons with competitors.
- medium: r/SunoAI
Suno music gen community: new model versions, lyric prompting techniques. Audio AI has a weak RSS ecosystem.
- medium: Tina Huang
AI workflows for data science, practical applications.
- medium: Two Minute Papers
Short summaries of AI papers, great for a quick scan.
- medium: Wes Roth
AI news with a more clickbaity tone; the Gemini filter weeds out the hype.

Moodboard 6 - Digital landscape
See a stunning "digital landscape" image created with Midjourney, complete with the exact parameters used for inspiration and experimentation.
Reddit user /u/Heath_co shared a captivating "Moodboard 6 - Digital landscape" image, showcasing the creative capabilities of Midjourney. The post includes the specific parameters used: --profile e762978 --v 8.1 --stylize 1000 --hd. This example highlights how precise parameter tuning can achieve distinct aesthetic results, particularly with a high stylize value and the --hd flag for enhanced detail. While not a new feature release, it provides a concrete instance of artistic expression and technical application for those exploring AI image generation. It serves as valuable inspiration for hobbyists and creative non-developers looking to replicate or adapt similar styles.
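Parameter strings like this can be assembled programmatically when batching prompt variations. A toy Python helper (the subject text below is invented for illustration; only the flags come from the post):

```python
def midjourney_prompt(subject, **params):
    # Boolean True renders as a bare flag (--hd); other values as --key value.
    flags = " ".join(
        f"--{k}" if v is True else f"--{k} {v}" for k, v in params.items()
    )
    return f"{subject} {flags}"

p = midjourney_prompt(
    "digital landscape, glowing grid horizon",
    profile="e762978", v="8.1", stylize=1000, hd=True,
)
```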
r/midjourney·creative_work·05/07/2026, 12:17 PM·/u/Heath_co
I built a tool to mix two artists on one image with region masks — Van Gogh + Picasso, no training, arbitrary refs
Mix different artistic styles in specific parts of an image using masks and IP-Adapters without any training or fine-tuning.
A new open-source tool allows users to apply distinct artistic styles to specific regions of an image using spatial masks. Built on Stable Diffusion 1.5, the system utilizes ControlNet (Canny and Tile) for structural integrity and two IP-Adapters for style injection. The technical core involves spatial routing, where each adapter's contribution is masked within the cross-attention layers to prevent 'muddy' averaging of styles. It offers three modes: global mixing, painterly emphasis, and region-specific stylization. While effective, the author notes that aggressive style weights can distort realistic faces and small color details. The project includes a GitHub repository with a Colab notebook and a Hugging Face Space for testing.
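The spatial routing idea can be sketched in a few lines. This is a toy illustration of the assumed mechanics, not the tool's actual code: each adapter's masked contribution is added only inside its region, so the two styles never blend into a muddy average.

```python
# Toy 1x4 "latent" row; values stand in for cross-attention outputs.
base    = [0.0, 0.0, 0.0, 0.0]       # base UNet output
style_a = [1.0, 1.0, 1.0, 1.0]       # adapter A ("Van Gogh") contribution
style_b = [-1.0, -1.0, -1.0, -1.0]   # adapter B ("Picasso") contribution
mask_a  = [1, 1, 0, 0]               # left region
mask_b  = [0, 0, 1, 1]               # right region
scale   = 0.8                        # per-adapter style weight

# Route each adapter's contribution through its own mask.
routed = [
    b + scale * ma * sa + scale * mb * sb
    for b, ma, sa, mb, sb in zip(base, mask_a, style_a, mask_b, style_b)
]
# Left positions carry only style A, right positions only style B.
```

Without the masks, both adapters would contribute everywhere and average against each other, which is exactly the 'muddy' failure mode the author describes.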
r/StableDiffusion·tooling·05/07/2026, 09:24 AM·/u/Longjumping_Gur_937
Is anyone actually getting good results with Flux2.DEV?
If you're struggling to get sharp, realistic images from Flux2.DEV, you're not alone: a user reports consistent issues with hazy outputs and a limited LoRA ecosystem, and is seeking community advice.
A Reddit user on r/StableDiffusion, /u/Extension-Yard1918, has reported persistent issues achieving sharp, realistic images with the Flux2.DEV model over several months of testing. Despite efforts like increasing resolution and step count, and experimenting with different samplers and settings, the generated outputs consistently appear hazy, soft, or foggy, failing to match the quality of models like Z-Image Turbo. The user also notes a weak image editing feature and a nearly nonexistent LoRA ecosystem, questioning if the problem lies with the model's training data, VAE, scheduler, or their own workflow. They are seeking practical advice and specific settings from the community to unlock Flux2.DEV's potential.
r/StableDiffusion·opinion·05/07/2026, 09:15 AM·/u/Extension-Yard1918
Never got good results from Klein? Me neither, til now
Stop using turbo LoRAs with Klein 9B; it achieves peak quality and speed with just 4 steps natively.
A user on r/comfyui discovered why many creators struggle to get high-quality results from the Klein 9B model. The issue stems from incorrectly applying turbo LoRAs or using too many sampling steps, which degrades the output. Klein 9B is designed to be natively fast and performs optimally with only 4 steps without any speed-up modifications. The post includes a downloadable ComfyUI workflow and clarifies licensing terms, stating that while outputs can be used commercially, the model itself requires a commercial license from Black Forest Labs for business use. This finding explains the polarizing reception of the model and provides a clear path to better prompt adherence and speed.
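The post's advice reduces to two checks on a workflow's sampler settings. A hypothetical sketch (the setting names are illustrative, not a real ComfyUI or diffusers API):

```python
# Settings per the post's advice for Klein 9B: run natively at 4 steps,
# no turbo/speed-up LoRAs. The cfg value is an assumption typical of
# distilled models, not stated in the post.
klein_settings = {"steps": 4, "loras": [], "cfg": 1.0}

def validate(settings):
    """Flag the two mistakes the post says degrade Klein 9B outputs."""
    problems = []
    if settings["steps"] > 4:
        problems.append("too many steps")
    if any("turbo" in lora.lower() for lora in settings["loras"]):
        problems.append("turbo LoRA applied")
    return problems
```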
r/comfyui·tutorial·05/07/2026, 01:43 AM·/u/Support_Marmoset
Ernie Image Lora training - my take
Practical insights and visual benchmarks for training LoRAs on the Ernie model, highlighting necessary adjustments to standard training workflows.
The author presents their findings and visual results from training a LoRA on the Ernie image model, a less common alternative to the Stable Diffusion ecosystem. The post includes specific technical insights into the training process, highlighting how hyperparameters like learning rate and rank need adjustment compared to standard SDXL workflows. Visual benchmarks provided via Imgur demonstrate the model's proficiency in handling complex architectural details and specific artistic styles. This contribution is particularly valuable for users looking to diversify their toolkit beyond mainstream models and understand the nuances of cross-architecture fine-tuning. It serves as both a technical guide and a proof-of-concept for the Ernie model's flexibility.
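The kind of adjustment described might look like the sketch below. All values are illustrative assumptions, not the author's recipe; the post only states that learning rate and rank need tuning relative to SDXL.

```python
# Hypothetical LoRA hyperparameters: an SDXL-style baseline, and the
# sort of adjusted config a different architecture like Ernie may need
# (e.g. a gentler learning rate and a larger rank).
sdxl_lora = {"learning_rate": 1e-4, "rank": 16, "alpha": 16}
ernie_lora = {**sdxl_lora, "learning_rate": 5e-5, "rank": 32}
```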
r/StableDiffusion·tutorial·05/06/2026, 10:53 PM·/u/malcolmrey
Wireframe - Flux.2 Klein 9b style LORA
New 'Wireframe' style LoRA for Flux.2 Klein 9b enables technical, mesh-like aesthetics in AI generations using the trigger word 'dvr_wf_style'.
Developer Dever has released a specialized Wireframe style LoRA designed specifically for the Flux.2 Klein 9b distilled model. This LoRA allows users to generate or edit images to have a technical, 3D-mesh aesthetic using the trigger word 'dvr_wf_style'. It was trained on the 9b base as a text-to-image model but demonstrates high flexibility in image-to-image editing tasks. The weights are hosted on Huggingface, where the author maintains a repository of various style LoRAs for the Flux ecosystem. This release is particularly relevant for creators looking for architectural or blueprint-like visuals within the Flux.2 framework.
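Using a style LoRA like this follows the usual trigger-word pattern: prepend the trigger to the prompt. A minimal sketch (the trigger word comes from the post; the prompt text and helper are illustrative):

```python
TRIGGER = "dvr_wf_style"  # trigger word for the Wireframe LoRA

def with_trigger(prompt):
    # Prepend the LoRA trigger so the style activates.
    return f"{TRIGGER}, {prompt}"

p = with_trigger("isometric city block, clean background")
```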
r/StableDiffusion·model_release·05/05/2026, 10:20 PM·/u/TheDudeWithThePlan
Luma Uni-1 is now available via Partner Nodes
Luma's Uni-1, an autoregressive model that reasons before drawing, is now available in ComfyUI, offering superior prompt adherence and text rendering.
Luma AI has integrated its Uni-1 model into ComfyUI via new Partner Nodes. Unlike traditional diffusion models, Uni-1 uses a decoder-only autoregressive transformer architecture that processes text and images as a single interleaved sequence. This allows the model to reason through complex prompts, decomposing instructions and planning composition before generating pixels. Key features include high-quality text rendering, material accuracy, and temporal consistency across multi-panel outputs. Users can access it now through Comfy Cloud or by installing the specific partner nodes in their local workflows.
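The interleaved-sequence idea can be sketched abstractly. This is a generic illustration of decoder-only autoregressive image generation, not Luma's actual token format: text tokens and image tokens share one flat sequence, so the model can emit a textual plan before any pixels.

```python
def build_sequence(prompt_tokens, plan_tokens, image_tokens):
    # One flat sequence: prompt, then reasoning/plan, then image patches.
    # Special markers here are invented for illustration.
    return (
        ["<bos>"] + prompt_tokens
        + ["<plan>"] + plan_tokens
        + ["<img>"] + image_tokens + ["<eos>"]
    )

seq = build_sequence(
    ["a", "neon", "sign", "reading", "'OPEN'"],
    ["layout:", "sign-centered", "render-text:", "OPEN"],
    ["patch_0", "patch_1", "patch_2"],
)
```

Because every image token is conditioned on the plan tokens that precede it, instruction decomposition happens in-sequence rather than in a separate module.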
ComfyUI Blog·model_release·05/05/2026, 04:04 PM·Purz
A new open weights image model appears in ArtificialAnalysis. Outperforming Flux.2 Pro and Z Image Turbo.
A new open-weights image model has topped the ArtificialAnalysis leaderboard, outperforming Flux.2 Pro and Z Image Turbo in human preference tests.
A new open-weights image generation model has surfaced on the ArtificialAnalysis leaderboard, claiming the top spot over established models like Flux.2 Pro and Z Image Turbo. This model's performance in Elo-based human preference rankings suggests a significant leap in quality for the open-source community. This development is crucial as it challenges the dominance of closed-source or 'pro' tier models in visual fidelity and prompt adherence. The community is currently dissecting the model's architecture and availability for local deployment. Early data indicates superior handling of complex textures and lighting compared to its predecessors, marking a potential shift in the state-of-the-art for local image generation.
r/StableDiffusion·model_release·05/04/2026, 07:07 PM·/u/Murky_Foundation5528
Relevance auto-scored by LLM (0–10). List shows top 30 from the last 7 days.