Spring Boot Integration
Auto-configuration for running Atmosphere on Spring Boot 4.0.5 (Spring Framework 6.2.8). Registers AtmosphereServlet, wires Spring DI into Atmosphere’s object factory, and exposes AtmosphereFramework and RoomManager as Spring beans.
Maven Coordinates
```xml
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-spring-boot-starter</artifactId>
  <version>${project.version}</version>
</dependency>
```

Spring Boot 4.0 modularization
Spring Boot 4.0 splits several modules into separate artifacts. When you depend on features that used to live in the main spring-boot jar, you may need to add them explicitly:
| Feature | Artifact |
|---|---|
| Servlet support | org.springframework.boot:spring-boot-servlet |
| Embedded web server | org.springframework.boot:spring-boot-web-server |
| Actuator health indicator | org.springframework.boot:spring-boot-health |
The Atmosphere starter depends on spring-boot-servlet transitively. Add spring-boot-health explicitly if you want the AtmosphereHealthIndicator to be picked up.
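For example, opting in to the health indicator looks like this (a sketch; it assumes the artifact version is managed by the Spring Boot BOM, so no explicit version is needed):

```xml
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-health</artifactId>
</dependency>
```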
SLF4J / Logback override
Spring Boot 4.0 ships with SLF4J 2.x. If your build inherits from a parent POM that pins an older SLF4J 1.x or Logback 1.2.x, you must override both in `<dependencies>` (not just `dependencyManagement`) for the starter to work.
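A sketch of such an override (the version numbers are illustrative; use the versions Spring Boot 4.0 manages):

```xml
<dependencies>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.16</version>
  </dependency>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.5.16</version>
  </dependency>
</dependencies>
```

Declaring them in `<dependencies>` (rather than only `dependencyManagement`) ensures the newer jars actually appear on the runtime classpath.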
Quick Start
application.yml
```yaml
atmosphere:
  packages: com.example.chat
```

Chat.java
Section titled “Chat.java”@ManagedService(path = "/atmosphere/chat")public class Chat {
@Inject private BroadcasterFactory factory;
@Inject private AtmosphereResource r;
@Ready public void onReady() { }
@Disconnect public void onDisconnect() { }
@Message(encoders = {JacksonEncoder.class}, decoders = {JacksonDecoder.class}) public Message onMessage(Message message) { return message; }}No additional configuration is needed beyond a standard @SpringBootApplication class.
Configuration Properties
All properties are under the atmosphere.* prefix:
| Property | Default | Description |
|---|---|---|
| atmosphere.packages | (none) | Comma-separated packages to scan for Atmosphere annotations |
| atmosphere.servlet-path | /atmosphere/* | Servlet URL mapping |
| atmosphere.session-support | false | Enable HTTP session support |
| atmosphere.websocket-support | (auto) | Explicitly enable/disable WebSocket |
| atmosphere.broadcaster-class | (default) | Custom Broadcaster implementation FQCN |
| atmosphere.broadcaster-cache-class | (default) | Custom BroadcasterCache implementation FQCN |
| atmosphere.heartbeat-interval | (default) | Server heartbeat frequency (Duration string, e.g. 30s) |
| atmosphere.order | 0 | Servlet load-on-startup order |
| atmosphere.init-params | (none) | Map of any ApplicationConfig key/value |
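Put together, a fuller application.yml using several of these properties might look like this (the values, and the ApplicationConfig key under init-params, are illustrative):

```yaml
atmosphere:
  packages: com.example.chat
  servlet-path: /atmosphere/*
  session-support: true
  heartbeat-interval: 30s
  init-params:
    org.atmosphere.cpr.broadcasterLifeCyclePolicy: EMPTY_DESTROY
```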
Auto-Configured Beans
- AtmosphereServlet — the servlet instance
- AtmosphereFramework — the framework for programmatic configuration
- RoomManager — the room API for presence and message history
- AtmosphereHealthIndicator — Actuator health check (when spring-boot-health is on the classpath)
- AtmosphereAiAutoConfiguration — scans for @AiEndpoint/@Agent beans and wires the resolved AgentRuntime (built-in, Spring AI, LangChain4j, ADK, Embabel, or Koog)
- AtmosphereAdminAutoConfiguration / AtmosphereActuatorAutoConfiguration / AtmosphereAuthAutoConfiguration — admin console, actuator metrics, and basic auth (opt-in via atmosphere.admin.*, atmosphere.actuator.*, atmosphere.auth.*)
AI Auto-Configuration
When atmosphere-ai is on the classpath, the starter auto-discovers the best available AgentRuntime via ServiceLoader (LangChain4j, Spring AI, ADK, Embabel, Koog, or the built-in OpenAI-compatible client) and scans for @AiEndpoint/@Agent beans.
```yaml
atmosphere:
  ai:
    mode: remote              # remote | local
    model: gemini-2.5-flash
    base-url:                 # optional, auto-derived from model
    api-key: ${GEMINI_API_KEY}
```

Environment variables LLM_MODE, LLM_MODEL, LLM_BASE_URL, and LLM_API_KEY override these properties when the starter runs outside Spring configuration. See the AI reference for the full AgentRuntime SPI.
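For example, the same settings supplied through the environment instead of application.yml (the values are illustrative):

```shell
# Override the atmosphere.ai.* properties from the environment.
export LLM_MODE=remote
export LLM_MODEL=gemini-2.5-flash
export LLM_API_KEY="${GEMINI_API_KEY:-}"   # empty if GEMINI_API_KEY is unset
```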
gRPC Transport
The starter can launch a gRPC server alongside the servlet container when atmosphere-grpc is on the classpath:
```yaml
atmosphere:
  grpc:
    enabled: true
    port: 9090
    enable-reflection: true
```

| Property | Default | Description |
|---|---|---|
| atmosphere.grpc.enabled | false | Enable gRPC transport server |
| atmosphere.grpc.port | 9090 | gRPC server port |
| atmosphere.grpc.enable-reflection | true | Enable gRPC server reflection |
Define a GrpcHandler bean to handle gRPC events:
```java
@Bean
public GrpcHandler grpcHandler() {
    return new GrpcHandlerAdapter() {
        @Override
        public void onOpen(GrpcChannel channel) {
            log.info("gRPC client connected: {}", channel.uuid());
        }

        @Override
        public void onMessage(GrpcChannel channel, String message) {
            log.info("gRPC message: {}", message);
        }
    };
}
```

Observability
OpenTelemetry Tracing (Auto-Configured)
Add opentelemetry-api to your classpath and provide an OpenTelemetry bean — the starter automatically registers AtmosphereTracing:
```xml
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-api</artifactId>
</dependency>
```

Every Atmosphere request generates a trace span with transport, resource UUID, broadcaster, and action attributes. Disable with atmosphere.tracing.enabled=false.
When atmosphere-mcp is also on the classpath, an McpTracing bean is auto-created for MCP tool/resource/prompt call tracing.
Micrometer Metrics (Auto-Configured)
When micrometer-core and MeterRegistry are on the classpath, the starter registers atmosphere.connections, atmosphere.messages, and atmosphere.broadcasters gauges.
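To read these gauges over HTTP, expose the standard Actuator metrics endpoint (this uses the stock Spring Boot Actuator property, not anything Atmosphere-specific, and assumes the web Actuator endpoints are on the classpath):

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,metrics
```

The gauges then appear under /actuator/metrics/atmosphere.connections and the sibling metric names above.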
GraalVM Native Image
The starter includes AtmosphereRuntimeHints for native image support:
```shell
cd samples/spring-boot-chat && ../../mvnw -Pnative package
./target/atmosphere-spring-boot-chat-*
```

Requires GraalVM JDK 21+ (Spring Boot 4.0.5 / Spring Framework 6.2.8 baseline).
@AiEndpoint annotation surfaces (new in 4.0.36)
Spring Boot’s @AiEndpoint annotation gained two declarative attributes in 4.0.36 that let you configure prompt caching and per-request retry without touching AgentExecutionContext directly.
@AiEndpoint.promptCache — prompt caching policy
Attach a CacheHint.CachePolicy to every request produced by an endpoint. The pipeline seeds each request’s CacheHint before dispatching to the runtime — Spring AI, LangChain4j, and the built-in OpenAI path emit prompt_cache_key on the wire, and the pipeline-level ResponseCache also honors the hint regardless of runtime.
```java
@AiEndpoint(
    path = "/ai/chat",
    systemPrompt = "You are a helpful assistant",
    promptCache = CacheHint.CachePolicy.CONSERVATIVE)
public class AiChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message);
    }
}
```

Three policy values:
- CachePolicy.NONE (default) — no caching hint
- CachePolicy.CONSERVATIVE — short TTL (30 min), only cache if the prefix is identical
- CachePolicy.AGGRESSIVE — longer TTL (24 h), cache any semantically similar prefix
The policy is endpoint-scoped. To set the cache hint per request, use context.withCacheHint() directly.
@AiEndpoint.retry — per-request retry policy
Override the client-level retry policy on a per-endpoint basis. Useful when a particular endpoint needs tighter or looser semantics than the global default (for example, a strict endpoint that must fail fast, or a best-effort background endpoint that can retry aggressively).
```java
@AiEndpoint(
    path = "/ai/strict",
    systemPrompt = "You are a mission-critical assistant",
    retry = @Retry(maxRetries = 0))
public class StrictChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message); // fails fast — no retries on transient errors
    }
}
```

```java
@AiEndpoint(
    path = "/ai/background",
    retry = @Retry(maxRetries = 5, initialDelayMs = 2000, backoffMultiplier = 2.0))
public class BackgroundChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message); // retries up to 5 times with exponential backoff
    }
}
```

@Retry attributes:
- maxRetries — the sentinel -1 means “inherit the client-level default”, 0 disables retries, and 1 or more sets the retry count
- initialDelayMs — base delay before the first retry (default 1000)
- maxDelayMs — cap on exponential backoff (default 30000)
- backoffMultiplier — exponential factor (default 2.0)
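The schedule these attributes imply is easy to derive. The sketch below is not the starter’s implementation; it assumes the delay before retry n is initialDelayMs × backoffMultiplier^(n−1), capped at maxDelayMs:

```java
// Sketch of the exponential backoff implied by the @Retry attributes above.
// Assumption: delay(n) = min(initialDelayMs * backoffMultiplier^(n-1), maxDelayMs).
public class BackoffSketch {

    static long delayMs(int retry, long initialDelayMs, double backoffMultiplier, long maxDelayMs) {
        double raw = initialDelayMs * Math.pow(backoffMultiplier, retry - 1);
        return Math.min((long) raw, maxDelayMs);
    }

    public static void main(String[] args) {
        // Documented defaults: initialDelayMs=1000, backoffMultiplier=2.0, maxDelayMs=30000
        for (int retry = 1; retry <= 6; retry++) {
            System.out.println("retry " + retry + ": " + delayMs(retry, 1000, 2.0, 30_000) + " ms");
        }
    }
}
```

Under the defaults, the delays double from 1 s up to the 30 s cap, which a sixth retry would hit.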
Runtime coverage: per-request retry is Built-in only in 4.0.36. Framework runtimes (Spring AI, LangChain4j, ADK, Koog, Embabel, Semantic Kernel) inherit their native retry layers and ignore the per-request override. The Built-in runtime threads context.retryPolicy() into OpenAiCompatibleClient.sendWithRetry as a real override. See the per-runtime capability matrix for the full breakdown.
Samples
- Spring Boot Chat — rooms, presence, REST API, Micrometer metrics, Actuator health
- Spring Boot AI Chat — built-in AI client
- Spring Boot MCP Server — MCP tools, resources, prompts
- Spring Boot OTel Chat — OpenTelemetry tracing with Jaeger