
February 2026 Release: Real-Time Agent Collaboration & 85% Faster Performance
Published on February 2, 2026
This month brings major improvements to test-agent collaboration, database performance, and test execution reliability. Teams can now monitor AI test generation in real-time, dashboards load 85% faster, and cached tests run 40% quicker with new authentication mocking.
What's new in February 2026?
Major improvements to test-agent collaboration, performance, and reliability shipped across the platform this month.
We've shipped real-time agent session monitoring, git integration for automated test tracking, database optimizations that cut query times by 85%, Docker containerization for safer execution, and authentication mocking that speeds up cached tests by 40%. Here's what these improvements mean for your QA workflow.
How does the new Agent Sessions UI improve team collaboration?
Real-time visibility into AI test generation sessions enables better coordination and faster debugging across teams.
The new Agent Sessions dashboard provides live visibility into all test-agent activity. Watch as agents generate tests, interact with your application, and propose changes - all in real time. You can send messages to running agents, review proposed test changes with side-by-side diffs, and track linked browser sessions and test runs without switching contexts.
For teams managing multiple test generation sessions, this means faster debugging when agents encounter issues, better coordination when multiple team members are working with agents simultaneously, and complete audit trails of what agents did and why. The interface streams tool executions as they happen, so you can catch problems early instead of discovering them after a 30-minute session completes.
- Live dashboard showing all active agent sessions with status indicators
- Real-time message streaming with tool execution visibility and results
- Direct interaction: send messages, accept/reject proposed changes, interrupt sessions
- Integrated browser session and test run tracking in dedicated tabs
- Artifact viewing for generated test files with syntax highlighting and diff views
According to internal benchmarks, teams using the Agent Sessions UI resolve test generation issues 3x faster than by reviewing session logs after completion, thanks to immediate visibility into agent decision-making and the ability to course-correct mid-session.
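The live stream described above can be thought of as a sequence of typed events folded into an audit trail. Here is a minimal sketch of that idea; the event shape (`type`, `tool`, `status` fields) is an illustrative assumption, not desplega.ai's actual wire format.

```python
# Hypothetical sketch: folding a stream of agent-session events into
# a simple audit trail. The field names are assumptions for illustration.

def summarize_session(events):
    """Build a human-readable trail from streamed session events."""
    trail = []
    for event in events:
        kind = event.get("type")
        if kind == "tool_execution":
            trail.append(f"tool:{event['tool']} -> {event['status']}")
        elif kind == "message":
            trail.append(f"agent: {event['text']}")
        elif kind == "proposed_change":
            trail.append(f"diff proposed for {event['file']}")
    return trail

events = [
    {"type": "message", "text": "Generating login test"},
    {"type": "tool_execution", "tool": "browser.click", "status": "ok"},
    {"type": "proposed_change", "file": "tests/login.spec.ts"},
]
print(summarize_session(events))
```

In the real UI the same trail is rendered live, which is what makes mid-session course correction possible.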
Why is database performance 85% faster now?
Strategic indexing and query optimization eliminated slow queries that previously caused timeouts during peak usage.
We identified and fixed critical database bottlenecks through comprehensive performance profiling. The improvements include 5 partial indexes on frequently queried columns, elimination of N+1 query patterns in analytics and reporting endpoints, and consolidation of redundant queries in metrics collection. The API key config endpoint that previously took 1,640ms now responds in under 200ms.
For customers with large test suites (10,000+ tests), the impact is dramatic. The test cases page previously timed out after 30 seconds when loading all tests into memory. With new backend pagination, the first page loads in under 1 second regardless of total test count. Navigate through thousands of tests instantly instead of waiting for massive data transfers that often failed.
- Added partial indexes on testrun.run_status, testrun.final_run_status, testsuiterun.run_status, testsuiterun.test_suite_id, and test.app_config_id for 5-10x faster filtering
- Batch loading in gather_previous_runs() reduced 14 individual queries to 1 query using GROUP BY with conditional counting
- Dashboard metrics endpoint consolidated 4 COUNT queries into 1 using scalar subqueries and CASE aggregations
- API key config queries dropped from 1,640ms to <200ms by removing a redundant status column from a composite index
- Backend pagination on test cases prevents loading 10,000+ records into browser memory
Based on production data, dashboard page load times decreased by 85% on average across all endpoints, with peak improvements exceeding 90% for customers with the largest datasets. This eliminates the 1-2 second delays that occurred during peak usage hours.
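The two core techniques above - partial indexes that cover only the rows you actually filter on, and a single CASE-aggregation query replacing several COUNTs - can be sketched with SQLite. Table and column names here are simplified stand-ins, not the real schema.

```python
import sqlite3

# Illustrative sketch of the optimizations described above, using SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE testrun (id INTEGER PRIMARY KEY, run_status TEXT);
-- Partial index: only in-flight rows are indexed, keeping it small
-- and fast for the status filters dashboards actually use.
CREATE INDEX idx_testrun_active ON testrun(run_status)
    WHERE run_status IN ('pending', 'running');
""")
conn.executemany("INSERT INTO testrun(run_status) VALUES (?)",
                 [("passed",), ("failed",), ("running",), ("pending",)])

# One query with CASE aggregations instead of four separate COUNT queries.
row = conn.execute("""
SELECT
  COUNT(*)                                                AS total,
  SUM(CASE WHEN run_status = 'passed'  THEN 1 ELSE 0 END) AS passed,
  SUM(CASE WHEN run_status = 'failed'  THEN 1 ELSE 0 END) AS failed,
  SUM(CASE WHEN run_status = 'running' THEN 1 ELSE 0 END) AS running
FROM testrun
""").fetchone()
print(row)  # (4, 1, 1, 1)
```

Collapsing four round trips into one is where most of the dashboard-endpoint savings come from; the partial index then keeps the remaining scan narrow.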
New Features
This release introduces capabilities that improve test reliability, team collaboration, and deployment flexibility across the platform.
- Real-time Agent Session Monitoring: Live dashboard with message streaming, tool execution visibility, and interactive controls for managing test generation sessions. Includes artifact viewing, diff comparison, and linked resource tracking (browser sessions, test runs).
- Git Integration for Test-Agent: Automatic test file synchronization from backend to agent workspace, with git-based change tracking for proposed test modifications. Enables better version control and team collaboration on test maintenance.
- HAR-Based Token Refresh Mocking: Cached test runs now replay authentication token refresh flows from HAR files instead of hitting real auth endpoints. Reduces cached test execution time by 40% and eliminates 60% of authentication-related flakiness.
- Docker Containerization for Test-Agent: Multi-stage Docker builds with Pi CLI and qa-use CLI pre-installed. Provides isolated execution environments for CI/CD pipelines with automatic workspace initialization and safer sandboxed agent runs.
- Hatchet Workflow Orchestration: Test-agent sessions now run through Hatchet for reliable distributed execution with automatic retries, better error handling, and horizontal scaling. Improves session completion rates by 25%.
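The HAR-based token refresh mocking listed above boils down to looking up a recorded auth response instead of calling the live endpoint. Here is a hedged sketch of that lookup; the refresh URL and HAR contents are illustrative assumptions.

```python
import json

# Sketch of HAR-based token replay: find the recorded token-refresh
# entry in a HAR document and return its response body, falling back
# to the live endpoint (None here) when no recording exists.

def replay_token_refresh(har, refresh_url="https://auth.example.com/token"):
    """Return the recorded token-refresh response body, if present."""
    for entry in har.get("log", {}).get("entries", []):
        if entry["request"]["url"] == refresh_url:
            return json.loads(entry["response"]["content"]["text"])
    return None  # no recording: caller hits the real auth endpoint

har = {"log": {"entries": [{
    "request": {"url": "https://auth.example.com/token"},
    "response": {"content": {
        "text": '{"access_token": "cached-abc", "expires_in": 3600}'}},
}]}}
print(replay_token_refresh(har)["access_token"])  # cached-abc
```

Skipping the network round trip to the auth server on every cached run is what produces the 40% speedup and removes a whole class of auth-timing flakiness.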
Improvements & Fixes
We've made targeted improvements across database performance, error handling, and developer experience to strengthen the platform foundation.
- Database Query Optimization: Added 5 partial indexes and eliminated N+1 patterns in analytics_service.py, test_run_issue_inference.py, api_services.py, and global_variables.py. Reduces query counts by 70% for dashboard endpoints.
- Sentry Noise Reduction: Filtered expected test execution errors from browser_agent, block_runner, browser_protocol, and test_runner modules. Cuts Sentry alert volume by 80% while preserving visibility into real system bugs.
- WebSocket Disconnect Handling: Improved error handling for client disconnections across all WebSocket endpoints (liveview, test-agent, pw_reporter). Prevents spurious error logs when users close their browsers or the network drops.
- Database Connection Pool Tuning: Increased pool_size from 50 to 75, max_overflow from 20 to 30, and pool_timeout from 10s to 20s. Made all settings configurable via environment variables for operational flexibility.
- Dependency Cache HAR Backfill: Post-execution backfill of HAR paths into cache entries created mid-run. Enables token refresh mocking for dependencies that were cached during test execution when HAR wasn't yet available.
- AI Retry Logic: Added automatic retry (3 attempts with 0.5s delay) when AI models return zero tool calls instead of the expected one. Fixes transient failures that previously required manual reruns.
- Upfront Zip File Validation: Added is_zipfile() checks before processing trace uploads. Provides clearer error messages for invalid archives instead of cryptic BadZipFile exceptions during extraction.
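The AI retry behavior above is simple to state precisely: retry the model call up to three times, pausing 0.5s between attempts, until it returns a tool call. A minimal sketch, where `call_model` is a hypothetical stand-in for the real model client:

```python
import time

# Sketch of the retry logic described above: retry a model call up to
# `attempts` times with a short delay when it returns no tool calls.

def call_with_retry(call_model, attempts=3, delay=0.5):
    last = None
    for attempt in range(attempts):
        last = call_model()
        if last.get("tool_calls"):       # got the expected tool call
            return last
        if attempt < attempts - 1:
            time.sleep(delay)            # brief pause before retrying
    return last                          # caller handles the empty result

# Simulate a model that fails transiently on the first attempt.
responses = iter([{"tool_calls": []},
                  {"tool_calls": [{"name": "run_test"}]}])
result = call_with_retry(lambda: next(responses))
print(result["tool_calls"][0]["name"])  # run_test
```

Because the failure mode is transient, a short fixed delay is usually enough; no backoff schedule is needed for three attempts.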
Performance Impact
This release delivers measurable improvements across database queries, test execution, and UI responsiveness.
Database query times decreased by 85% on average, with the API key config endpoint improving from 1,640ms to under 200ms. Test cases page now loads in under 1 second for suites with 10,000+ tests (previously timed out after 30 seconds). Cached test execution is 40% faster with HAR-based token mocking, saving an average of 30 seconds per cached run. Authentication-related test flakiness decreased by 60%, reducing false negative investigations.
For teams running hundreds of cached tests daily, the execution time savings compound to hours saved per day. Combined with reduced flakiness, QA teams spend less time investigating phantom failures and more time catching real bugs. The database optimizations ensure dashboards remain responsive even during peak usage when multiple team members are viewing analytics simultaneously.
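The "hours saved per day" claim is easy to sanity-check. Using an illustrative team running 300 cached tests per day (an assumption, not a figure from this release) and the stated 30-second average saving:

```python
# Back-of-envelope check of the compounding savings described above.
cached_runs_per_day = 300          # illustrative assumption
seconds_saved_per_run = 30         # average saving stated in this release
hours_saved = cached_runs_per_day * seconds_saved_per_run / 3600
print(f"{hours_saved:.1f} hours saved per day")  # 2.5 hours saved per day
```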
Frequently Asked Questions
When is this release available?
All features are immediately available in production for all desplega.ai users. No updates or configuration changes are required - optimizations apply automatically upon release.
Do I need to update anything to get the performance improvements?
No configuration changes needed. All database optimizations and performance improvements are server-side and automatically applied to all customers. Your dashboards are already faster.
How do I access the new Agent Sessions UI?
Navigate to the Agent Sessions page from the main menu. The dashboard provides real-time visibility into all test-agent activity with live message streaming and tool execution tracking.
Will cached test authentication work with my existing tests?
Yes, HAR-based token mocking works automatically with any test using cached authentication. No test modifications required - just enable mock_token_refresh in your app configuration.
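As a hedged sketch only, the flag might sit in your app configuration roughly like this; the surrounding structure and key placement are assumptions, with only the `mock_token_refresh` name taken from this release (check the documentation for the exact schema):

```json
{
  "app_config": {
    "mock_token_refresh": true
  }
}
```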
Can I still use test-agent without Docker?
Yes, Docker containerization is completely optional. It provides additional isolation and safety for CI/CD environments but the CLI works standalone as before for local development.
These improvements are available now in production. The Agent Sessions UI is accessible from the main navigation menu, database optimizations apply automatically to all endpoints, and HAR-based token mocking can be enabled in your app configuration settings.
Questions about any of these features? Reach out to our team or check the documentation for detailed configuration guides. We're committed to delivering quality at speed - faster performance, better reliability, and tools that help your team ship with confidence.
Ready to Get Started?
Experience the latest features and see how desplega.ai can accelerate your QA workflow.
Related Releases
Browser Protocol API, Enhanced Test CLI & Session Storage
Interactive browser sessions, test generation from recordings, CLI API endpoints, and comprehensive session storage support - quality at speed with desplega.ai
Jan 19, 2026
New Release: Smart Test Dependency Caching & Discovery V3
Major performance improvements with intelligent dependency caching, new two-phase discovery architecture, and enhanced test execution infrastructure.
Jan 12, 2026
New Release: AI-Powered Testing & In-App Support
This week at desplega.ai: AI-generated test actions, automatic check suggestions from HAR files, Slack-integrated support chat, global variables, and Playwright CI reporter improvements for better test correlation.