Spring Boot

Auto-configuration for running Atmosphere on Spring Boot 4.0.5 (Spring Framework 6.2.8). Registers AtmosphereServlet, wires Spring DI into Atmosphere’s object factory, and exposes AtmosphereFramework and RoomManager as Spring beans.

<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-spring-boot-starter</artifactId>
    <version>${project.version}</version>
</dependency>

Spring Boot 4.0 splits several modules into separate artifacts. When you depend on features that used to live in the main spring-boot jar, you may need to add them explicitly:

Feature                    Artifact
Servlet support            org.springframework.boot:spring-boot-servlet
Embedded web server        org.springframework.boot:spring-boot-web-server
Actuator health indicator  org.springframework.boot:spring-boot-health

The Atmosphere starter depends on spring-boot-servlet transitively. Add spring-boot-health explicitly if you want the AtmosphereHealthIndicator to be picked up.
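Under that assumption, opting in to the health indicator would look like the following (the version is assumed to come from the Spring Boot BOM, so none is declared here):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-health</artifactId>
</dependency>
```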

Spring Boot 4.0 ships with SLF4J 2.x. If your build inherits from a parent POM that pins an older SLF4J 1.x or Logback 1.2.x, you must override both in <dependencies> (not just dependencyManagement) for the starter to work.

atmosphere:
  packages: com.example.chat

@ManagedService(path = "/atmosphere/chat")
public class Chat {

    @Inject
    private BroadcasterFactory factory;

    @Inject
    private AtmosphereResource r;

    @Ready
    public void onReady() { }

    @Disconnect
    public void onDisconnect() { }

    @Message(encoders = {JacksonEncoder.class}, decoders = {JacksonDecoder.class})
    public Message onMessage(Message message) {
        return message;
    }
}

No additional configuration is needed beyond a standard @SpringBootApplication class.

All properties are under the atmosphere.* prefix:

Property                            Default        Description
atmosphere.packages                 (none)         Comma-separated packages to scan for Atmosphere annotations
atmosphere.servlet-path             /atmosphere/*  Servlet URL mapping
atmosphere.session-support          false          Enable HTTP session support
atmosphere.websocket-support        (auto)         Explicitly enable/disable WebSocket
atmosphere.broadcaster-class        (default)      Custom Broadcaster implementation FQCN
atmosphere.broadcaster-cache-class  (default)      Custom BroadcasterCache implementation FQCN
atmosphere.heartbeat-interval       (default)      Server heartbeat frequency (Duration string, e.g. 30s)
atmosphere.order                    0              Servlet load-on-startup order
atmosphere.init-params              (none)         Map of any ApplicationConfig key/value pairs
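Putting several of these keys together, a configuration might look like the following sketch (all values are illustrative, including the init-params entry):

```yaml
atmosphere:
  packages: com.example.chat
  servlet-path: /ws/*
  session-support: true
  heartbeat-interval: 30s
  init-params:
    # Any ApplicationConfig key can be passed through; this one is an example.
    org.atmosphere.cpr.broadcasterLifeCyclePolicy: EMPTY_DESTROY
```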
The starter also exposes the following beans:

  • AtmosphereServlet — the servlet instance
  • AtmosphereFramework — the framework for programmatic configuration
  • RoomManager — the room API for presence and message history
  • AtmosphereHealthIndicator — Actuator health check (when spring-boot-health is on the classpath)
  • AtmosphereAiAutoConfiguration — scans for @AiEndpoint / @Agent beans and wires the resolved AgentRuntime (built-in, Spring AI, LangChain4j, ADK, Embabel, or Koog)
  • AtmosphereAdminAutoConfiguration / AtmosphereActuatorAutoConfiguration / AtmosphereAuthAutoConfiguration — admin console, actuator metrics, and basic auth (opt-in via atmosphere.admin.*, atmosphere.actuator.*, atmosphere.auth.*)

When atmosphere-ai is on the classpath, the starter auto-discovers the best available AgentRuntime via ServiceLoader (LangChain4j, Spring AI, ADK, Embabel, Koog, or the built-in OpenAI-compatible client) and scans for @AiEndpoint/@Agent beans.

atmosphere:
  ai:
    mode: remote            # remote | local
    model: gemini-2.5-flash
    base-url:               # optional, auto-derived from model
    api-key: ${GEMINI_API_KEY}

Environment variables LLM_MODE, LLM_MODEL, LLM_BASE_URL, and LLM_API_KEY override these properties when the starter runs outside Spring configuration. See the AI reference for the full AgentRuntime SPI.
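The precedence described above (environment variable wins over the configured property) can be sketched with a hypothetical helper; `resolve` and the class name are illustrative, not starter API:

```java
// Hypothetical sketch: an environment variable such as LLM_MODEL, when set,
// overrides the value bound from atmosphere.ai.model.
public class EnvOverride {

    static String resolve(String envVar, String configuredValue) {
        String env = System.getenv(envVar);
        return (env != null && !env.isEmpty()) ? env : configuredValue;
    }

    public static void main(String[] args) {
        // With LLM_MODEL unset, the configured property value is used.
        System.out.println(resolve("LLM_MODEL", "gemini-2.5-flash"));
    }
}
```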

The starter can launch a gRPC server alongside the servlet container when atmosphere-grpc is on the classpath:

atmosphere:
  grpc:
    enabled: true
    port: 9090
    enable-reflection: true

Property                           Default  Description
atmosphere.grpc.enabled            false    Enable gRPC transport server
atmosphere.grpc.port               9090     gRPC server port
atmosphere.grpc.enable-reflection  true     Enable gRPC server reflection

Define a GrpcHandler bean to handle gRPC events:

@Bean
public GrpcHandler grpcHandler() {
    return new GrpcHandlerAdapter() {
        @Override
        public void onOpen(GrpcChannel channel) {
            log.info("gRPC client connected: {}", channel.uuid());
        }

        @Override
        public void onMessage(GrpcChannel channel, String message) {
            log.info("gRPC message: {}", message);
        }
    };
}

Add opentelemetry-api to your classpath and provide an OpenTelemetry bean — the starter automatically registers AtmosphereTracing:

<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
</dependency>

Every Atmosphere request generates a trace span with transport, resource UUID, broadcaster, and action attributes. Disable with atmosphere.tracing.enabled=false.

When atmosphere-mcp is also on the classpath, an McpTracing bean is auto-created for MCP tool/resource/prompt call tracing.

When micrometer-core and MeterRegistry are on the classpath, the starter registers atmosphere.connections, atmosphere.messages, and atmosphere.broadcasters gauges.

The starter includes AtmosphereRuntimeHints for native image support:

cd samples/spring-boot-chat && ../../mvnw -Pnative package
./target/atmosphere-spring-boot-chat-*

Requires GraalVM JDK 21+ (Spring Boot 4.0.5 / Spring Framework 6.2.8 baseline).

@AiEndpoint annotation surfaces (new in 4.0.36)


The @AiEndpoint annotation gained two declarative attributes in 4.0.36 that let you configure prompt caching and per-request retry without touching AgentExecutionContext directly.

@AiEndpoint.promptCache — prompt caching policy


Attach a CacheHint.CachePolicy to every request produced by an endpoint. The pipeline seeds each request’s CacheHint before dispatching to the runtime — Spring AI, LangChain4j, and the Built-in OpenAI path emit prompt_cache_key on the wire, and the pipeline-level ResponseCache also honors the hint regardless of runtime.

@AiEndpoint(
    path = "/ai/chat",
    systemPrompt = "You are a helpful assistant",
    promptCache = CacheHint.CachePolicy.CONSERVATIVE
)
public class AiChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message);
    }
}

Three policy values:

  • CachePolicy.NONE (default) — no caching hint
  • CachePolicy.CONSERVATIVE — short TTL (30 min), only cache if the prefix is identical
  • CachePolicy.AGGRESSIVE — longer TTL (24 h), cache any semantically similar prefix

The policy is endpoint-scoped. To set the cache hint per request, use context.withCacheHint() directly.
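The TTLs described above can be modeled as follows; this is a hypothetical illustration of the policy semantics, and the real CacheHint.CachePolicy type in atmosphere-ai may expose them differently:

```java
import java.time.Duration;

// Hypothetical model of the TTLs each policy implies; not the actual
// CacheHint.CachePolicy API, just an illustration of the three values.
public class CachePolicyTtls {

    enum Policy { NONE, CONSERVATIVE, AGGRESSIVE }

    static Duration ttlOf(Policy p) {
        switch (p) {
            case CONSERVATIVE: return Duration.ofMinutes(30); // short TTL, identical prefix only
            case AGGRESSIVE:   return Duration.ofHours(24);   // long TTL, similar prefixes too
            default:           return Duration.ZERO;          // NONE: no caching hint
        }
    }
}
```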

@AiEndpoint.retry — per-request retry policy


Override the client-level retry policy on a per-endpoint basis. Useful when a particular endpoint needs tighter or looser semantics than the global default (for example, a strict endpoint that must fail fast, or a best-effort background endpoint that can retry aggressively).

@AiEndpoint(
    path = "/ai/strict",
    systemPrompt = "You are a mission-critical assistant",
    retry = @Retry(maxRetries = 0)
)
public class StrictChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message); // fails fast — no retries on transient errors
    }
}

@AiEndpoint(
    path = "/ai/background",
    retry = @Retry(maxRetries = 5, initialDelayMs = 2000, backoffMultiplier = 2.0)
)
public class BackgroundChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message); // retries up to 5 times with exponential backoff
    }
}

@Retry attributes:

  • maxRetries — the sentinel -1 means “inherit the client-level default”, 0 disables retries, and a positive value sets an explicit retry count
  • initialDelayMs — base delay before the first retry (default 1000)
  • maxDelayMs — cap on exponential backoff (default 30000)
  • backoffMultiplier — exponential factor (default 2.0)
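The delay schedule these attributes imply can be sketched as follows; the helper is hypothetical (not starter API), assuming delay(n) = min(maxDelayMs, initialDelayMs * backoffMultiplier^(n-1)):

```java
// Hypothetical sketch of the exponential backoff implied by @Retry's
// attributes: each retry waits initialDelayMs scaled by backoffMultiplier,
// capped at maxDelayMs.
public class RetryBackoff {

    static long delayMs(int attempt, long initialDelayMs,
                        long maxDelayMs, double backoffMultiplier) {
        double delay = initialDelayMs * Math.pow(backoffMultiplier, attempt - 1);
        return (long) Math.min(maxDelayMs, delay);
    }

    public static void main(String[] args) {
        // With the documented defaults (1000 ms initial, 2.0 factor, 30000 ms cap),
        // print the wait before each of six retries.
        for (int attempt = 1; attempt <= 6; attempt++) {
            System.out.println("retry " + attempt + " -> "
                    + delayMs(attempt, 1000, 30000, 2.0) + " ms");
        }
    }
}
```

Note how the cap kicks in on the sixth retry: the uncapped delay would be 32000 ms, but maxDelayMs limits it to 30000 ms.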

Runtime coverage: per-request retry is Built-in only in 4.0.36. Framework runtimes (Spring AI, LangChain4j, ADK, Koog, Embabel, Semantic Kernel) inherit their native retry layers and ignore the per-request override. The Built-in runtime threads context.retryPolicy() into OpenAiCompatibleClient.sendWithRetry as a real override. See the per-runtime capability matrix for the full breakdown.