February Sprint: pip Install, Autonomous Agents, and Channels#

February was a productive month. We shipped three things that fundamentally change how LIT Platform works, and we landed on a framing that makes it easier to explain what we're building.

pip install#

The biggest friction point for new users was the Docker-based install. It made sense when LIT Platform was a multi-tenant server product, but we've been running in single-user mode for months and the install story hadn't caught up.

Now it has. LIT Platform ships as a standard Python wheel:

pip install https://github.com/Positronic-AI/lit-releases/releases/download/v0.1.24/positronic_lit-0.1.24-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
lit serve

Open http://localhost:8080 and the setup wizard connects your AI provider. That's it. No Docker, no GPU, no cloud account required.

We support Linux, macOS (Apple Silicon and Intel), and Windows. Python 3.10–3.13 on Linux and macOS; 3.10–3.11 recommended on Windows for GPU training compatibility.

The setup wizard supports three providers: Claude (our recommended default), Gemini (free API key, no additional software), and ChatGPT. Each provider has a different auth flow — the wizard handles all of it.

The full install guide is here.

Autonomous Agents (Heartbeat)#

The second thing we shipped is the feature I'm most excited about.

An autonomous agent in LIT Platform has a heartbeat — it wakes up on a schedule, does its work, and posts results to a channel or DM. Between cycles it sleeps, consuming no tokens. You don't initiate the conversation; the agent does.

Here's what that looks like in practice: I have an agent that runs every hour, reads the training logs from an ongoing experiment, and posts a summary to #model-training. When a run fails or a metric degrades unexpectedly, it flags it. I don't poll the logs — the agent does that for me.
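The cycle above can be sketched in a few lines. This is an illustrative sketch, not the LIT Platform API — the class and field names here are hypothetical:

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class HeartbeatAgent:
    """Hypothetical sketch of a heartbeat agent (not the real LIT API):
    wake on a schedule, do the work, post to a channel, then sleep."""
    interval_s: float
    work: Callable[[], str]                           # e.g. read logs, summarize
    channel: List[str] = field(default_factory=list)  # stand-in for #model-training

    def run(self, cycles: int) -> None:
        for _ in range(cycles):
            summary = self.work()         # one cycle's work and summary
            self.channel.append(summary)  # posted like a normal chat message
            time.sleep(self.interval_s)   # asleep between cycles: no tokens spent

# Hourly in production; interval 0 here only to demo the loop.
agent = HeartbeatAgent(interval_s=0, work=lambda: "run 42: loss 0.31, no regressions")
agent.run(2)
```

The point of the shape: the agent initiates, and each cycle ends with a post rather than a return value handed back to a caller.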

The key design decision we got right: the heartbeat output is a first-class conversation, not a log dump. The agent's reasoning, tool calls, and final summary stream into the channel in real time, exactly the way a regular chat response does. You can reply in the thread. The agent has context on what it said in prior cycles. It's a colleague checking in, not a cron job writing to stdout.

We considered building a separate "reporting" abstraction for heartbeat output but decided against it. The one-pipe principle: one direct path from agent output to the channel. No translation layer. The agent's natural response is the deliverable.
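A minimal sketch of the one-pipe principle (function and variable names are ours, not LIT internals): streamed chunks append to the channel as-is, with no intermediate report object in between.

```python
from typing import Iterable, List

def pipe_to_channel(response: Iterable[str], channel: List[str]) -> None:
    """One-pipe sketch: every chunk of the agent's streamed response
    (reasoning, tool calls, final summary) lands in the channel directly.
    No translation layer, no report schema."""
    for chunk in response:
        channel.append(chunk)  # the natural response is the deliverable

channel: List[str] = []
pipe_to_channel(iter(["Checking logs...", "Run 42 healthy.", "Loss 0.31."]), channel)
```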

Read more about autonomous agents →

Channels & Direct Messaging#

Heartbeat needed somewhere to post. So we built channels.

A channel is a persistent, named workspace. You create #model-training, assign an agent, and everything — your messages, the agent's responses, heartbeat cycle outputs — accumulates there in chronological order. It looks like Slack because that's the mental model people already have.

Direct Messages work the same way but one-on-one: you and a specific agent, in a thread that persists across all your conversations with that agent.

The organizational value is real. Before channels, data science work in LIT produced a flat list of sessions that was hard to navigate after a few weeks. Now the structure mirrors how the work actually happens: projects, not conversations.

On multi-user deployments, channels can be shared across the team. Everyone sees the same history. The AI's work product stops being siloed to the person who initiated the chat.
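The shape described above is simple enough to sketch. Field names here are guesses for illustration, not the LIT schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Channel:
    """Sketch of a channel: a persistent, named workspace where every
    message (yours, the agent's, heartbeat cycle outputs) accumulates
    in chronological order. Shared channels give the whole team the
    same append-only history."""
    name: str                     # e.g. "#model-training"
    agent: str                    # the assigned agent
    messages: List[Tuple[str, str]] = field(default_factory=list)  # (author, text)

    def post(self, author: str, text: str) -> None:
        self.messages.append((author, text))  # chronological, append-only

# A DM is the same structure, scoped to one user and one agent.
dm = Channel(name="dm:you-trainer", agent="trainer")
dm.post("you", "How did last night's run go?")
dm.post("trainer", "Converged; val loss 0.29.")
```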

Read more about channels and DMs →

Vibe Data Science#

Vibe coding — AI-assisted software development — has become a standard term. We've been doing the equivalent for data science workflows and haven't named it well.

"Vibe data science" is the frame: AI agents embedded in your data science workflow, running continuously, reaching out when something needs attention, accumulating context over weeks of work. Not a chatbot you query. A collaborative environment where the AI is a participant, not a tool.

We call it vibe data science because that's where we built it and where it produces the sharpest results. The features generalize — channels, heartbeat, and multi-agent sessions are useful for any collaborative AI work — but we're not trying to compete with general-purpose AI platforms. We're going deep on the data science and ML use case.

This framing also matters for the teams we work with. We build the platform; they build their products on top of it. LIT is infrastructure, not competition.

What's Next#

  • Better scheduling: cron-expression scheduling for heartbeat agents, not just fixed intervals
  • Channel notifications: push notifications when an agent posts to a channel you're watching
  • Multi-model routing: route heartbeat cycles to a cheaper model by default, escalate to a better model when something warrants it
  • pip install on PyPI: the wheel-from-GitHub-URL install works but isn't pretty. Moving to a proper PyPI package.
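On the scheduling item: the gain of cron expressions over fixed intervals is firing at aligned wall-clock times. A toy sketch of the idea, supporting only a minute-step field like `*/15 * * * *` (a real implementation would lean on a cron library such as croniter; this is not LIT code):

```python
from datetime import datetime, timedelta

def next_fire(expr: str, now: datetime) -> datetime:
    """Toy cron-style scheduler: returns the next aligned minute after `now`
    for a minute-step expression like "*/15 * * * *". Illustrative only."""
    minute_field = expr.split()[0]
    step = int(minute_field.lstrip("*/"))       # "*/15" -> 15
    base = now.replace(second=0, microsecond=0)
    nxt = base + timedelta(minutes=1)           # strictly after `now`
    while nxt.minute % step != 0:
        nxt += timedelta(minutes=1)
    return nxt

# Fires at :00, :15, :30, :45 rather than "every 15 minutes from whenever it started".
print(next_fire("*/15 * * * *", datetime(2025, 2, 1, 10, 7)))
```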

If you're running LIT Platform and have feedback on any of the February features, reach out — contact@lit.ai or open an issue on GitHub.