
Your AI Agent Just Joined the Meeting

Welcome back. Three things landed this week that are worth your time.

The pace of tooling right now is genuinely hard to keep up with. But every so often, a week drops a few things that actually change how you work rather than just adding another tab to your browser. This was one of those weeks.

Three separate releases landed within days of each other: an AI agent that can join your Google Meet calls in real time, a GitHub Copilot feature that runs multiple coding agents in parallel, and a free open-source tool that does everything Screen Studio charges $89 for. Let's get into it.

Big Drop · April 3, 2026

Pika Labs Just Put a Face on Your AI Agent

Pika Labs shipped the beta of PikaStream 1.0, a real-time video chat skill that works with any AI agent, including Claude, OpenClaw, and their own Pika AI Self. The premise is simple: you send a Google Meet invite to your agent, and it shows up to the call with a face, a voice, and full access to its memory and personality.

This isn't a demo or a concept. People are already using it to have agents book appointments live on calls. The agent maintains context throughout the conversation and can execute tasks during the call itself, not just respond to prompts afterward.

The post from @pika_labs crossed 2.1 million views in under 48 hours. That reaction alone tells you this hit differently.


What it means: on client calls, support calls, and onboarding flows, agents can now participate in real time, not just respond after the fact.

What This Actually Changes About Work Calls

Right now, most people use AI by switching between their chat window and whatever they're working on. PikaStream 1.0 collapses that gap. The agent is present in the conversation, not sitting in another tab.

For people running client calls, sales demos, or support flows, this is significant. You stop switching context. The agent is just there. What Pika has done is give the agent a persistent presence in the meeting rather than making it a background tool you consult during breaks.

Whether this becomes a standard feature across other platforms or stays a Pika-exclusive for a while is unclear. But the direction is obvious: agents are moving from text boxes into live environments.

GitHub Copilot · CLI Update

/fleet Runs Multiple Coding Agents at Once

The new /fleet command in GitHub Copilot CLI lets you break a large task into independent subtasks and run them in parallel. Each subagent gets an isolated context window. They share the filesystem but don't communicate with each other. The orchestrator handles coordination.

You can also define custom agents by dropping config files into .github/agents/ in your repo. Each agent can have its own model, toolset, and instructions.
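
To make that orchestration model concrete, here's a minimal sketch of the fan-out pattern in plain Python: independent subtasks run in parallel, each with its own context, all working against the same directory. The task names and the run_subagent function are invented for illustration; this is not how Copilot implements it under the hood.

    # Minimal sketch of the fan-out pattern /fleet describes: independent
    # subtasks run in parallel, each with its own isolated context, all
    # sharing one working directory. Names are illustrative, not Copilot's.
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    WORKSPACE = Path(".")  # the shared filesystem, like a repo checkout

    def run_subagent(task: str) -> str:
        # Each subagent gets its own context; modeled here as an
        # independent call with its own local state.
        context = {"task": task, "workspace": WORKSPACE}
        return f"completed: {context['task']}"

    subtasks = ["build API layer", "build UI components", "update config"]

    # The orchestrator fans the independent subtasks out and collects results.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        for result in pool.map(run_subagent, subtasks):
            print(result)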

Best use case: parallel tracks on the same codebase, like building an API layer, UI components, and config simultaneously. For linear, single-file work, the overhead isn't worth it. Full details here →

Running Agents in Parallel Is a Different Way of Thinking

The instinct most people have with coding tools is to work one thing at a time: finish the API, then build the UI, then update the config. /fleet pushes you to think about your work as a dependency graph instead.

Tasks with no dependencies on each other can run simultaneously. That's not a new concept in software engineering, but having it available at the task-orchestration level inside a CLI tool is new for most day-to-day workflows.
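
As a toy illustration of that framing, Python's standard graphlib can layer a dependency graph into waves, where everything in a wave has its dependencies finished and could run at the same time. The task names here are made up:

    # Toy illustration of dependency-graph thinking: group tasks into
    # waves where every task's dependencies are already done, so the
    # whole wave could run in parallel. Task names are invented.
    from graphlib import TopologicalSorter

    deps = {
        "api layer": set(),
        "ui components": set(),
        "config": set(),
        "integration tests": {"api layer", "ui components"},
    }

    sorter = TopologicalSorter(deps)
    sorter.prepare()
    wave = 1
    while sorter.is_active():
        ready = list(sorter.get_ready())  # tasks whose dependencies are satisfied
        print(f"wave {wave}: run in parallel -> {ready}")
        sorter.done(*ready)
        wave += 1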

The honest caveat: this pays off on genuinely large, multi-track work. If you're editing a single component or fixing a scoped bug, spinning up a fleet adds noise, not speed. The skill is knowing when to reach for it.

Turn AI into Your Income Engine

Ready to transform artificial intelligence from a buzzword into your personal revenue generator?

HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.

Inside you'll discover:

  • A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential

  • Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background

  • Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve

Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.

Open Source · Free · MIT

OpenScreen

The free, open-source answer to Screen Studio's $89 price tag.

Auto cursor zoom - follows every click automatically, no manual keyframes

Motion blur and smooth transitions - pan and zoom look intentional, not recorded

Webcam overlay, backgrounds, annotations - full post-production inside the app

Windows, macOS, and Linux - no watermarks, no account, no subscription

Fork: Recordly - native pipeline, same zero cost

View on HN →

Why OpenScreen Matters Beyond the Price Tag

The tooling gap between "professional-looking demo" and "raw screen recording" has always cost something. You either paid for Screen Studio or Loom, hired someone to edit, or shipped recordings that looked rough around the edges.

OpenScreen removes that gap entirely. The auto-zoom that follows your cursor and clicks is the specific feature that makes demos look intentional rather than captured. It's the difference between a recording that looks like a tutorial and one that looks like a product.

The 8,400+ GitHub stars it picked up and its run on the Hacker News front page on April 4 suggest the timing is right. Developers and creators have been looking for exactly this.

On The Radar · April 2026

Google Model Release

Gemini 3.1 Flash-Lite is now in developer preview

Google shipped Gemini 3.1 Flash-Lite on March 3 via AI Studio and Vertex AI. It delivers 2.5x faster time to first token and 45% higher output speed than Gemini 2.5 Flash. Pricing is $0.25 per million input tokens and $1.50 per million output tokens, well below Gemini 3.1 Pro at $2.00 per million input tokens. It scored 1432 on the Arena.ai leaderboard. Worth a test if you are building anything latency-sensitive.
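
For a rough sense of what those rates mean in practice, here's the back-of-envelope arithmetic at the listed prices. The request count and token sizes below are made-up examples:

    # Back-of-envelope cost at the listed Flash-Lite rates:
    # $0.25 per 1M input tokens, $1.50 per 1M output tokens.
    INPUT_PER_M = 0.25
    OUTPUT_PER_M = 1.50

    def cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

    # Example: 10,000 requests at 2,000 input / 500 output tokens each (made-up sizes)
    print(f"${cost(10_000 * 2_000, 10_000 * 500):.2f}")  # -> $12.50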

OpenAI Integrations Update

ChatGPT can now write directly to Notion, Linear, and Dropbox

OpenAI updated its Box, Notion, Linear, and Dropbox connectors with write capabilities on March 28. ChatGPT could previously only read from these tools. It can now create and update content directly inside them mid-session. For documentation, research, or project management workflows, this removes the step that used to require copy-pasting or a separate Zapier trigger.

Salesforce Acquisition

Salesforce bought Momentum to plug meeting audio into Agentforce

Salesforce agreed to acquire Momentum on February 18, its second acquisition of 2026. Momentum pulls insights from Zoom and Google Meet calls and converts them into structured data for agentic workflows. The goal is letting Agentforce 360 act on what gets said in meetings, not just what gets typed into CRM fields. The deal is expected to close in Q1 FY2027.

Stay sharp,
Better Every Day
