1.23.2026

ServiceNow positions itself as the control layer for
enterprise AI execution

ServiceNow announced a multi-year partnership with OpenAI to bring GPT-5.2 into its AI Control Tower and Xanadu platform, reinforcing ServiceNow’s strategy of focusing on enterprise workflows, guardrails, and orchestration rather than building frontier models itself.

1.22.2026

DEPLOY Fully Private + Local AI RAG Agents

In this tutorial, I'll show you how to build a production-grade multimodal RAG system in which data never leaves your infrastructure. Zero external API calls. Complete data sovereignty.
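The tutorial's actual stack isn't spelled out in this blurb, so here is a minimal sketch of the retrieval step in a fully local RAG pipeline. The keyword-overlap scorer is a stand-in assumption; a real deployment would swap it for a locally hosted embedding model, keeping the zero-external-calls property:

```python
# Minimal local retrieval sketch: score documents by token overlap with
# the query. No network calls are made anywhere in this pipeline; the
# overlap scorer is a placeholder for a locally hosted embedding model.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

docs = [
    "Invoices are stored in the finance vault.",
    "The on-call rotation is documented in the runbook.",
    "Finance vault access requires VPN and MFA.",
]
print(retrieve("Where are invoices stored?", docs, k=1))
```

The retrieved passages would then be stuffed into the context window of a locally served model, so the full query path stays on your own hardware.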



1.21.2026

Open Responses - The NEW Standard API for Open Models

In this video, I look at the Open Responses Standard that's been released by OpenAI to support open models with their Responses SDK.
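The video covers the standard itself; as a rough illustration of the request shape a Responses-style API uses (a single `input` field rather than a chat-completions `messages` array), here is a sketch that only builds the JSON body. The model name is an assumed example of a locally hosted open model, and nothing is actually sent:

```python
import json

# Sketch of a Responses-style request body. Unlike the chat-completions
# format, the Responses API takes the prompt as a single "input" field.
# The model name below is an assumption for illustration only.

def build_responses_request(model: str, prompt: str) -> dict:
    return {"model": model, "input": prompt}

body = build_responses_request("gpt-oss-20b", "Summarize this release note.")
print(json.dumps(body))
```

Consult the published Open Responses spec for the full set of supported fields before wiring this into a client.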



1.20.2026

Why Google Antigravity Suddenly Makes Sense

Antigravity from Google is changing how developers code with AI. This tutorial covers the new agent harness with Gemini 3 Pro, showing you workflows that rival the best tools.



1.16.2026

Task Queues Are Replacing Chat Interfaces

In this video, I share the inside scoop on why Claude Cowork matters more than the feature list suggests:

 • Why file system agents beat browser agents for high-stakes work
 • How the anti-slop architecture shifts cognitive load upstream
 • What task queues replacing chat means for AI interaction
 • Why Anthropic shipped this in 10 days using their own tool



1.15.2026

Antigravity NEW Update is HUGE!

The latest Antigravity update brings Agent Skills, Subagents, AI Automation, and more, taking your agentic workflows to the next level. Learn how to build smarter, faster, and fully automated projects with the latest features.



1.14.2026

This new, dead-simple prompt technique boosts LLM accuracy by up to 76% on non-reasoning tasks

Google researchers have found that simply repeating the input query (literally copying and pasting the prompt so it appears twice) consistently improves performance across major models, including Gemini, GPT-4o, Claude, and DeepSeek.

The paper, titled "Prompt Repetition Improves Non-Reasoning LLMs," presents a finding that is almost suspiciously simple: for tasks that don’t require complex reasoning steps, stating the prompt twice yields significantly better results than stating it once.
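The technique really is that mechanical; a minimal sketch, where the prompt is concatenated with itself before being sent to the model. The separator is my assumption, as the paper may join the two copies differently:

```python
def repeat_prompt(prompt: str, times: int = 2, sep: str = "\n\n") -> str:
    """Duplicate the prompt so the model sees it stated more than once."""
    return sep.join([prompt] * times)

# The doubled string is what gets sent to the model in place of the
# original single-copy prompt.
doubled = repeat_prompt("What is the capital of Australia?")
print(doubled)
```

Because the transformation happens entirely on the client side, it works with any model API and costs nothing beyond the extra input tokens.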