1.23.2026

ServiceNow positions itself as the control layer for
enterprise AI execution

ServiceNow announced a multi-year partnership with OpenAI to bring GPT-5.2 into its AI Control Tower and Xanadu platform, reinforcing ServiceNow’s strategy to focus on enterprise workflows, guardrails, and orchestration rather than building frontier models itself.

1.22.2026

DEPLOY Fully Private + Local AI RAG Agents

In this tutorial, I'll show you how to build a production-grade multimodal RAG system whose data never leaves your infrastructure. Zero external API calls. Complete data sovereignty.
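As a minimal illustration of the "zero external API calls" idea, here is a toy fully local retriever: it uses a hashed bag-of-words embedding and cosine similarity, all computed on-device with the standard library. This is a sketch of the retrieval step only, not the tutorial's actual stack (which would use a real local embedding model and vector store); the documents and query are made up for the example.

```python
import hashlib
import math
import re

def embed(text: str, dim: int = 1024) -> list[float]:
    """Hashed bag-of-words embedding, computed entirely on-device."""
    vec = [0.0] * dim
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query -- no network calls."""
    q = embed(query)
    scored = sorted(
        docs,
        key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
        reverse=True,
    )
    return scored[:k]

docs = [
    "Invoices are stored in the finance database.",
    "The VPN config lives in /etc/wireguard.",
    "Employee onboarding checklist and forms.",
]
top = retrieve("where do we keep invoices?", docs, k=1)
```

A real deployment would swap `embed` for a locally hosted embedding model and feed `top` to a local LLM as context; the data-sovereignty property comes from every step running on your own hardware.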



1.21.2026

Open Responses - The NEW Standard API for Open Models

In this video, I look at the Open Responses Standard that's been released by OpenAI to support open models with their Responses SDK.



1.20.2026

Why Google Antigravity Suddenly Makes Sense

Antigravity from Google is changing how developers code with AI. This tutorial covers the new agent harness with Gemini 3 Pro, walking through workflows that rival the best tools.



1.16.2026

Task Queues Are Replacing Chat Interfaces

In this video, I share the inside scoop on why Claude Cowork matters more than the feature list suggests:

 • Why file system agents beat browser agents for high-stakes work
 • How the anti-slop architecture shifts cognitive load upstream
 • What task queues replacing chat means for AI interaction
 • Why Anthropic shipped this in 10 days using their own tool



1.15.2026

Antigravity NEW Update is HUGE!

The latest Antigravity update brings Agent Skills, Subagents, AI Automation, and more, taking your agentic workflows to the next level. Learn how to build smarter, faster, and fully automated projects with the latest features.



1.14.2026

This new, dead simple prompt technique boosts accuracy on LLMs by up to 76% on non-reasoning tasks

Google researchers have found that simply repeating the input query—literally copying and pasting the prompt so it appears twice—consistently improves performance across major models including Gemini, GPT-4o, Claude, and DeepSeek.

The paper, titled "Prompt Repetition Improves Non-Reasoning LLMs," presents a finding that is almost suspiciously simple: for tasks that don’t require complex reasoning steps, stating the prompt twice yields significantly better results than stating it once.
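The technique really is as simple as it sounds. Here is a minimal sketch of prompt repetition; note that the exact separator and message placement the paper uses may differ, so treat this as an illustration of the idea rather than the paper's protocol.

```python
def repeat_prompt(prompt: str, times: int = 2) -> str:
    """Duplicate the prompt verbatim, separated by a blank line."""
    return "\n\n".join([prompt] * times)

question = "Which planet is closest to the sun?"
doubled = repeat_prompt(question)
# `doubled` is what you would send as the user message instead of `question`.
```

Because the transformation is model-agnostic and adds no reasoning instructions, it can be dropped in front of any chat API call at the cost of roughly doubling input tokens.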

1.13.2026

NVIDIA's 13 New Models

NVIDIA released 13 open models at its CES 2026 event. The models — trained on NVIDIA’s own supercomputers — are powering breakthroughs across healthcare, climate science, robotics, embodied intelligence, and autonomous driving.



1.12.2026

Google announces a new protocol to facilitate commerce
using AI agents

Google today announced a new open standard for AI agent-based shopping, called the Universal Commerce Protocol (UCP), at the National Retail Federation (NRF) conference.

The standard, developed with companies including Shopify, Etsy, Wayfair, Target, and Walmart, lets agents operate across different stages of the customer buying process, from discovery through post-purchase support. The core idea is that a single standard can connect these stages instead of requiring separate integrations for each agent.

1.09.2026

Claude Code 2.1.0 arrives with smoother workflows
and smarter agents

Anthropic has released Claude Code v2.1.0, a notable update to its "vibe coding" development environment for autonomously building software, spinning up AI agents, and completing a wide range of computer tasks. Head of Claude Code Boris Cherny announced the release in a post on X last night.

1.08.2026

MiroMind’s MiroThinker 1.5 delivers trillion-parameter performance from a 30B model — at 1/20th the cost

MiroThinker 1.5 from MiroMind joins a growing number of small but powerful reasoning models, with just 30 billion parameters compared to the hundreds of billions or trillions used by leading foundation large language models (LLMs).

But MiroThinker 1.5 stands out among these smaller reasoners for one major reason: it offers agentic research capabilities rivaling trillion-parameter competitors like Kimi K2 and DeepSeek, at a fraction of the inference cost.

1.07.2026

The Last Claude Code Tutorial You'll Ever Need

Master AI coding workflows with context engineering—the real principles behind every vibe coding framework. Whether you use Claude Code or Cursor AI, these fundamentals will transform how you build apps.



1.06.2026

TII’s Falcon H1R 7B can out-reason models up to 7x its size —
and it’s (mostly) open

By abandoning pure Transformer orthodoxy in favor of a hybrid architecture, TII claims to have built a 7-billion-parameter model that not only rivals but outperforms competitors nearly 7x its size — including the 32B and 47B variants of Alibaba's Qwen and NVIDIA's Nemotron.

1.05.2026

Why Notion’s biggest AI breakthrough came from simplifying everything

When initially experimenting with LLMs and agentic AI, software engineers at Notion AI leaned on advanced code generation, complex schemas, and heavyweight instructions.

Trial and error quickly taught the team that it could drop all of that complicated data modeling. Notion’s AI engineering lead Ryan Nystrom and his team pivoted to simple prompts, human-readable representations, minimal abstraction, and familiar markdown formats. The result was dramatically improved model performance.
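The lesson generalizes: models handle a human-readable markdown rendering of a record better than a deeply nested schema. Here is a hypothetical before/after sketch of that simplification; the record shape and `to_markdown` helper are invented for illustration and are not Notion's actual internals.

```python
# "Before": the kind of structured record an agent might be handed raw.
record = {
    "title": "Launch plan",
    "owner": "dana",
    "tasks": [
        {"name": "Write spec", "done": True},
        {"name": "Review design", "done": False},
    ],
}

# "After": render it as plain markdown the model can read and emit naturally.
def to_markdown(rec: dict) -> str:
    lines = [f"# {rec['title']}", f"Owner: {rec['owner']}", "", "## Tasks"]
    for task in rec["tasks"]:
        box = "x" if task["done"] else " "
        lines.append(f"- [{box}] {task['name']}")
    return "\n".join(lines)

prompt_context = to_markdown(record)
```

The markdown string goes into the prompt in place of the nested dict, trading machine-oriented structure for a representation the model has seen billions of times in training data.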