
Spring Boot Interview Questions: Core Concepts Explained With Depth

📍 Part of: Java Interview → Topic 6 of 6
Spring Boot interview questions explained with real-world context, code examples, and the WHY behind each answer — not just definitions to memorize.
⚙️ Intermediate — basic Interview knowledge assumed
In this tutorial, you'll learn
  • Auto-configuration is conditional, not magical — it uses @ConditionalOnClass, @ConditionalOnMissingBean, and @ConditionalOnProperty guards. Your explicit bean definitions always win. The Conditions Evaluation Report (--debug) shows exactly what activated and why.
  • Constructor injection is not just a style preference — it enables immutability with final fields, makes dependencies explicit to callers, and makes unit tests trivial without starting a Spring context. Field injection actively hides dependencies and forces tests to use Spring.
  • @ConfigurationProperties over @Value for any group of related configuration — you get type-safe binding, JSR-303 validation that fails fast at startup, and IDE autocomplete. @Value produces silent runtime failures when properties are missing or renamed.
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
Quick Answer
  • @SpringBootApplication combines @SpringBootConfiguration + @EnableAutoConfiguration + @ComponentScan — three annotations in one
  • Auto-configuration uses @ConditionalOnClass and @ConditionalOnMissingBean guards — your explicit beans always win
  • Constructor injection is the right default — it enables immutability, makes dependencies explicit, and makes unit tests trivial
  • Injecting a prototype bean into a singleton captures it once at startup — use ObjectProvider for fresh instances each time
  • Run with --debug flag to print the Conditions Evaluation Report — shows exactly why each auto-configuration fired or was skipped
  • @ConfigurationProperties over @Value for grouped config — type safety, validation, and IDE autocomplete for free
🚨 START HERE
Spring Boot Production Debugging Cheat Sheet
Quick-reference commands for diagnosing Spring Boot production issues. Each entry maps a specific observable symptom to the exact commands that get you to root cause fastest.
🟡 Application fails to start — port already in use or Spring context fails to load
Immediate Action: Check which process holds the port and read the full startup exception from the log — the first exception in the stack, not the last.
Commands
lsof -i :8080
tail -100 /var/log/app/startup.log | grep -A 20 'APPLICATION FAILED TO START'
Fix Now: Kill the conflicting process (kill <PID> from the lsof output; escalate to kill -9 only if the process ignores SIGTERM) or change server.port in application.properties. For context load failures, the root cause is almost always in the first nested exception — scroll past the wrapping exceptions to find it.
🔴 Application runs out of memory — OOMKill in Kubernetes or OutOfMemoryError in logs
Immediate Action: Capture a heap dump before the process dies — this is your only window into what was in memory at the time of the failure.
Commands
jcmd $(pgrep -f spring-boot) GC.heap_dump /tmp/heapdump.hprof
kubectl describe pod <pod-name> | grep -A 10 'Last State'
Fix Now: Open the heap dump in Eclipse MAT and check the dominator tree — the top object by retained heap is almost always the leak source. Add -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp to get automatic dumps on the next occurrence. Increase -Xmx as a temporary measure while investigating the root cause.
🟠 High GC pause times — application latency spikes correlate with garbage collection cycles
Immediate Action: Check GC throughput and identify which heap generation is under pressure before tuning anything.
Commands
jstat -gcutil $(pgrep -f spring-boot) 1000 10
jcmd $(pgrep -f spring-boot) GC.heap_info
Fix Now: If the Old generation (O column in jstat) is consistently above 80%, you have a slow memory leak or objects being promoted too aggressively. Switch to G1GC with -XX:+UseG1GC if you are not already using it. If the Young generation is growing and shrinking rapidly, your object allocation rate is high — look for short-lived collections created in hot paths.
🟡 Database connection pool exhaustion — requests queue and eventually time out under load
Immediate Action: Check active versus maximum connections and pending connection requests in real time via Actuator.
Commands
curl http://localhost:8080/actuator/metrics/hikaricp.connections.active
curl http://localhost:8080/actuator/metrics/hikaricp.connections.pending
Fix Now: If active equals maximum-pool-size and pending is nonzero, increase spring.datasource.hikari.maximum-pool-size. Calculate the correct pool size with the HikariCP-recommended formula: (core count * 2) + effective spindle count. If active never drops even between requests, you have unclosed connections or long-running transactions holding connections — check for @Transactional methods that do non-database work while holding a connection.
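The sizing formula above is simple arithmetic. A minimal sketch — the method name is invented for illustration, and "effective spindle count" is roughly 1 for a single SSD:

```java
// Sketch of the HikariCP-recommended sizing rule quoted above:
// connections = (core count * 2) + effective spindle count.
// recommendedPoolSize is a hypothetical helper, not a HikariCP API.
public class PoolSizeSketch {

    static int recommendedPoolSize(int coreCount, int effectiveSpindleCount) {
        return coreCount * 2 + effectiveSpindleCount;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // e.g. on an 8-core box with one SSD: 8 * 2 + 1 = 17
        System.out.println("Suggested maximum-pool-size: "
            + recommendedPoolSize(cores, 1));
    }
}
```

Treat the result as a starting point, not a hard rule — measure hikaricp.connections.pending under real load before settling on a value.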
Production Incident: The Singleton That Stole a Prototype — A Payment Audit Compliance Failure
A payment audit service injected a prototype-scoped audit token bean into a singleton service. Every transaction in production shared the same audit token. Compliance auditors flagged the system as non-compliant because unique transaction tracking was impossible.
Symptom: Compliance auditors discovered that every payment transaction in a rolling 30-day period shared the same audit token ID. Transaction correlation was completely broken — when investigating a disputed charge, the audit trail pointed to dozens of unrelated transactions instead of the specific one. The system appeared to process all transactions under a single identity. Support tickets for disputed charges could not be investigated because the audit log was effectively useless.
Assumption: The team's first assumption was a bug in the UUID generation logic inside ForgeAuditToken — that it was always returning the same value due to a seeding problem or static state in the random number generator. They spent two days adding entropy validation, switching UUID implementations, and adding logging to the token generator. Every log line showed the token generator producing different values when called directly in tests. The token appeared correct in isolation — the bug was invisible without understanding Spring's scope mechanics.
Root cause: ForgeAuditToken was annotated with @Scope("prototype") and injected via a standard @Autowired field into ForgeOrderService, which was a singleton. Spring injects field dependencies exactly once — at singleton creation time during application startup. The singleton captured the first and only ForgeAuditToken instance created during its initialization and held a reference to that same object for the entire lifetime of the application. The prototype scope declaration was ignored in practice because the bean was never retrieved from the Spring context after that initial injection. Every payment transaction used the same ForgeAuditToken object with the same timestamp and the same generated token value.
Fix: Replaced the @Autowired ForgeAuditToken field with ObjectProvider<ForgeAuditToken> and updated the payment processing logic to call tokenProvider.getObject() at the start of each transaction. ObjectProvider retrieves a fresh bean from the Spring context on each call, correctly honoring the prototype scope. A unit test was added that calls tokenProvider.getObject() twice and asserts the two returned instances are different objects with different timestamps. An ArchUnit rule was added to the CI pipeline that fails the build if any prototype-scoped bean is found injected directly into a singleton-scoped bean — preventing the same class of bug from reaching production again.
Key Lesson
  • Never inject a prototype-scoped bean directly into a singleton — Spring resolves field and constructor dependencies once at singleton creation time, effectively converting your prototype into a singleton for the lifetime of the application
  • Use ObjectProvider<YourPrototypeBean> and call provider.getObject() each time you need a fresh instance — this is the correct production pattern and works regardless of the consumer's scope
  • Write a test that asserts two consecutive provider.getObject() calls return different instances — a five-line test that would have caught this exact bug before it reached production
  • Field injection actively hid this bug — a constructor that accepted ObjectProvider<ForgeAuditToken> would have made the dependency lifecycle explicit and prompted review during code inspection
  • Add static analysis rules (ArchUnit) for scope-related constraints — bugs of this class are invisible at runtime until they cause compliance or data integrity failures
Production Debug Guide
When Spring Boot behaves unexpectedly in production, here is the diagnostic sequence. These are ordered by frequency — the first three account for about 70% of the issues I have seen across teams.
Auto-configuration class did not fire — expected bean is missing from the context
Run the application with the --debug flag or set logging.level.org.springframework.boot.autoconfigure=DEBUG in application.properties. Check the Conditions Evaluation Report for the specific class under 'Negative matches' — look for 'did not match' reasons. Common causes: required class not on classpath (@ConditionalOnClass failed), bean already defined by user configuration (@ConditionalOnMissingBean triggered), or required property not set (@ConditionalOnProperty failed). The report tells you the exact reason in plain text — no guessing required.
Prototype-scoped bean behaves like a singleton — same instance returned every time
Check whether the prototype bean is injected directly into a singleton via an @Autowired field or a constructor parameter. If the singleton receives the prototype once at startup and holds a reference, every subsequent use of that reference returns the same object. Replace the direct injection with ObjectProvider<PrototypeBean> and call provider.getObject() at the point where you need a fresh instance. Write a test that calls getObject() twice and asserts different instances — verify the fix actually works before closing the ticket.
Application fails to start with 'Port already in use' or 'Address already in use'
Find the process holding the port before changing anything: lsof -i :8080 on Mac/Linux, netstat -ano | findstr :8080 on Windows. Determine whether it is another instance of your application (common in development when a previous run did not terminate cleanly) or a different service. Either kill the conflicting process or change server.port in application.properties. In Docker or Kubernetes, check for port mapping conflicts and verify no other pod is already bound to the same node port.
Application runs out of memory in production — OOMKill in Kubernetes or OutOfMemoryError in logs
Capture a heap dump while the application is running at high memory, before it dies: jcmd <PID> GC.heap_dump /tmp/heapdump.hprof. Analyze with Eclipse MAT — look at the dominator tree to find which objects are retaining the most memory. Common causes: large collections growing without eviction (caches with no TTL), unclosed streams held in static collections, or Hibernate's first-level cache accumulating during long-running batch transactions. Set -Xmx explicitly in the Dockerfile or Kubernetes resource limits rather than relying on JVM defaults, and add -XX:+HeapDumpOnOutOfMemoryError so you get a dump automatically on the next OOM.
Database connection pool exhaustion — requests queue and eventually time out
Check the active connection count against the pool maximum via Actuator: GET /actuator/metrics/hikaricp.connections.active and /actuator/metrics/hikaricp.connections.pending. If active equals maximum-pool-size and pending is climbing, you have one of two problems: the pool is too small for your concurrency level, or slow queries are holding connections too long. Check for queries without indexes, N+1 query patterns, or long-running transactions that hold a connection while doing non-database work. Add spring.datasource.hikari.connection-timeout=20000 to fail fast rather than queuing indefinitely — timeouts generate alertable errors, queue buildup does not.
Spring Boot application startup takes over 30 seconds where 5 seconds is expected
Profile startup with --debug to identify which auto-configuration classes are being evaluated and how many are negative matches. Each evaluation has a cost. Compare how many starters are in pom.xml with how many you actually use — an application with spring-boot-starter-data-mongodb, spring-boot-starter-amqp, and spring-boot-starter-data-redis in a project that uses none of them is evaluating hundreds of irrelevant configurations. Remove unused starters first. Then consider spring.main.lazy-initialization=true to defer bean creation until first use — effective for development environments where startup time matters more than first-request latency.

Spring Boot has become the default way Java teams build microservices, REST APIs, and enterprise applications. Nearly every Java backend role posted today lists it as a requirement, which means it dominates the technical interview circuit at every seniority level.

Candidates who memorize annotations and definitions collapse under follow-up questions within the first two minutes. Senior interviewers are not testing whether you know what @SpringBootApplication does — they assume you do. They are probing the mechanism: how auto-configuration actually decides what to wire, why constructor injection matters beyond style preference, what happens when you inject a prototype bean into a singleton, and how to debug a missing bean in production without guessing.

I have conducted dozens of Spring Boot technical interviews and reviewed hundreds of candidates. The pattern is consistent: candidates who can explain the conditional assembly model, demonstrate they have read a Conditions Evaluation Report, and describe a real production failure involving bean scope or auto-configuration get offers. Candidates who recite definitions do not.

This guide covers the questions senior interviewers actually ask — with the mechanism behind each answer, production failures that illustrate the concepts, and code examples that demonstrate understanding rather than memorization.

What Is Spring Boot Auto-Configuration and How Does It Actually Work?

Auto-configuration is the heart of Spring Boot and the most misunderstood concept in interviews. Candidates say 'Spring Boot configures itself automatically' — but that is like saying a plane 'flies itself.' True in a superficial sense, but it does not explain the mechanism, and the mechanism is what gets you hired.

When your application starts, Spring Boot reads a file called META-INF/spring.factories — located inside the spring-boot-autoconfigure JAR — looking for entries under the key org.springframework.boot.autoconfigure.EnableAutoConfiguration. In Spring Boot 3.x, this moved to META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports. Either way, the file lists hundreds of candidate configuration classes. Each one is annotated with @ConditionalOn* guards that function as evaluation criteria: only activate me if specific conditions are true at startup time.

For example, DataSourceAutoConfiguration carries @ConditionalOnClass({DataSource.class, EmbeddedDatabaseType.class}) and @ConditionalOnMissingBean(DataSource.class). If a JDBC driver class is on the classpath and you have not defined your own DataSource bean, both conditions pass and Spring Boot creates a connection pool for you. If you define your own DataSource bean, @ConditionalOnMissingBean fails and the auto-configuration steps aside entirely — your explicit definition wins, no conflict.

This conditional-first design is the insight that separates understanding Spring Boot from just using it. Auto-configuration never overrides what you explicitly define. It fills gaps. The entire model is 'provide sensible defaults that vanish when the user makes a different choice.' Every starter dependency you add to pom.xml pulls in auto-configuration classes with their own conditional guards. Your classpath is the primary configuration signal.
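The decision logic can be modeled in a few dozen lines of plain Java. This is a toy sketch, not Spring's actual implementation — the container map, helper names, and bean names are all invented for illustration — but it captures the two most common guards and the "explicit beans win" rule:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of conditional assembly (NOT Spring internals):
//   conditionalOnClass       ~ @ConditionalOnClass  ("is this class on the classpath?")
//   conditionalOnMissingBean ~ @ConditionalOnMissingBean ("did the user already define it?")
public class ConditionalAssemblyToy {

    // Simulated bean container: bean name -> instance
    static final Map<String, Object> container = new HashMap<>();

    static boolean conditionalOnClass(String className) {
        try {
            Class.forName(className);   // classpath is the configuration signal
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    static boolean conditionalOnMissingBean(String beanName) {
        return !container.containsKey(beanName);
    }

    // Auto-configuration fills gaps; it never overrides an explicit bean.
    static void autoConfigure(String beanName, String requiredClass, Supplier<Object> factory) {
        if (conditionalOnClass(requiredClass) && conditionalOnMissingBean(beanName)) {
            container.put(beanName, factory.get());
            System.out.println(beanName + ": positive match — auto-configured");
        } else {
            System.out.println(beanName + ": negative match — skipped");
        }
    }

    public static void main(String[] args) {
        // User explicitly defined "dataSource" -> auto-configuration steps aside
        container.put("dataSource", "user-defined pool");
        autoConfigure("dataSource", "java.sql.Connection", () -> "auto pool");

        // No user bean, required class present -> auto-configuration fires
        autoConfigure("jdbcTemplate", "java.sql.Connection", () -> "auto template");

        System.out.println("dataSource   -> " + container.get("dataSource"));
        System.out.println("jdbcTemplate -> " + container.get("jdbcTemplate"));
    }
}
```

The printed positive/negative matches mirror what the real Conditions Evaluation Report shows — every skipped configuration has a stated reason.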

The follow-up question in every senior interview is: 'How would you debug why a particular auto-configuration is not firing?' The answer is the Conditions Evaluation Report — run with --debug or set logging.level.org.springframework.boot.autoconfigure=DEBUG. The report shows every auto-configuration class evaluated at startup, grouped into Positive matches (fired), Negative matches (skipped and why), and Unconditional classes (always run). It shows the exact @Conditional annotation that failed and what value it tested. This report makes auto-configuration completely transparent — there is no magic, only conditions.

io/thecodeforge/autoconfigdemo/AutoConfigurationDemo.java · JAVA
package io.thecodeforge.autoconfigdemo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * io.thecodeforge: Demonstrating auto-configuration in action.
 *
 * What triggers JdbcTemplate auto-configuration here:
 *   1. H2 is on the classpath (pom.xml: h2, scope=runtime)
 *      -> @ConditionalOnClass(DataSource.class) passes
 *   2. No DataSource bean is defined by the user
 *      -> @ConditionalOnMissingBean(DataSource.class) passes
 *   3. Both conditions pass -> DataSourceAutoConfiguration fires
 *   4. JdbcTemplateAutoConfiguration detects DataSource bean exists
 *      -> Creates JdbcTemplate bean automatically
 *
 * Run with: java -jar app.jar --debug
 * Look for DataSourceAutoConfiguration in "Positive matches" section
 */
@SpringBootApplication
public class AutoConfigurationDemo {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
            SpringApplication.run(AutoConfigurationDemo.class, args);

        // This bean was auto-configured — we never wrote a DataSource or JdbcTemplate bean
        JdbcTemplate jdbcTemplate = context.getBean(JdbcTemplate.class);

        jdbcTemplate.execute("CREATE TABLE product (id INT, name VARCHAR(50))");
        jdbcTemplate.update("INSERT INTO product VALUES (1, 'Forge Wireless Keyboard')");

        String productName = jdbcTemplate.queryForObject(
            "SELECT name FROM product WHERE id = ?",
            String.class,
            1
        );

        System.out.println("Auto-configured DB result: " + productName);

        // Now demonstrate overriding: if we had defined a DataSource bean ourselves,
        // DataSourceAutoConfiguration would have skipped — our bean wins
        context.close();
    }
}
▶ Output
// Conditions Evaluation Report excerpt (--debug output):
//
// Positive matches:
// DataSourceAutoConfiguration matched:
// - @ConditionalOnClass found required classes 'javax.sql.DataSource', 'org.h2.Driver'
// - @ConditionalOnMissingBean (types: javax.sql.DataSource) did not find any beans
//
// JdbcTemplateAutoConfiguration matched:
// - @ConditionalOnClass found required class 'org.springframework.jdbc.core.JdbcTemplate'
// - @ConditionalOnSingleCandidate (types: javax.sql.DataSource) found a primary candidate
//
// Negative matches:
// MongoAutoConfiguration:
// Did not match:
// - @ConditionalOnClass did not find required class 'com.mongodb.client.MongoClient'
//
// Application output:
// Auto-configured DB result: Forge Wireless Keyboard
Mental Model
Auto-Configuration Is Conditional Assembly, Not Magic
Spring Boot does not configure everything — it evaluates hundreds of @ConditionalOn guards and only activates what your classpath and explicit bean definitions permit. Every decision is recorded in the Conditions Evaluation Report.
  • @ConditionalOnClass fires only if a specific class is on the classpath — no JDBC driver class means no DataSource auto-configuration, no error, no warning
  • @ConditionalOnMissingBean fires only if you have not already defined a bean of that type — your explicit bean always wins, the auto-configured one steps aside
  • @ConditionalOnProperty fires only if a specific property is set to a specific value — use this to toggle features on and off via application.properties
  • The --debug flag prints the full Conditions Evaluation Report — every auto-configuration class with the exact condition that passed or failed, in plain English
  • Spring Boot 3.x uses AutoConfiguration.imports instead of spring.factories — same conditional mechanism, different discovery file location
  • The order of auto-configuration evaluation is controlled by @AutoConfigureBefore and @AutoConfigureAfter — relevant when your custom auto-configuration depends on another
📊 Production Insight
A team added spring-boot-starter-data-jpa to a service that did not yet have a database. The starter pulled in HikariCP as a transitive dependency. DataSourceAutoConfiguration saw HikariCP on the classpath, found no DataSource bean defined, and attempted to create a connection pool to the default localhost:5432. The application crashed at startup with a connection refused error to a database that did not exist. The team spent 45 minutes investigating network configuration before running --debug and seeing DataSourceAutoConfiguration in the Positive matches list. Adding spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration suppressed it immediately. The rule they added to their onboarding documentation: run --debug at least once during the first startup of any new service to see exactly what auto-configuration has activated.
🎯 Key Takeaway
Auto-configuration uses @ConditionalOnClass, @ConditionalOnMissingBean, and @ConditionalOnProperty guards — it is conditional assembly, not magic. Every decision has a traceable reason.
Your explicit bean definitions always win — auto-configuration fills gaps, never overrides explicit configuration.
Run with --debug to read the Conditions Evaluation Report — this is the single most powerful debugging tool for understanding Spring Boot behavior and it is the answer senior interviewers want to hear.
Debugging Auto-Configuration Failures
If: Expected bean is missing from the context — injection fails with NoSuchBeanDefinitionException
Use: Run with --debug and find the auto-configuration class under Negative matches — the report shows the exact condition that failed and what it evaluated
If: Auto-configuration fires but uses wrong settings — wrong database URL, wrong pool size
Use: Override specific properties in application.properties (spring.datasource.url, spring.datasource.hikari.maximum-pool-size) — you rarely need to redefine the entire bean
If: Auto-configuration conflicts with your custom configuration — NoUniqueBeanDefinitionException
Use: Add @Primary to your custom bean for disambiguation, or exclude the conflicting auto-configuration with spring.autoconfigure.exclude in application.properties
If: Application starts slowly with many auto-configurations being evaluated unnecessarily
Use: Remove unused starters from pom.xml — each starter pulls in auto-configuration classes that must be evaluated at startup even if all conditions fail
If: Custom auto-configuration in a library JAR is not being discovered
Use: Verify the class is registered in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports (Spring Boot 3.x) or spring.factories (Spring Boot 2.x) — without registration, Spring Boot never finds it

Spring Boot Beans, Scopes, and Dependency Injection — The Questions That Trip People Up

Dependency Injection is the backbone of every Spring Boot application, and interviewers probe it specifically because surface-level knowledge collapses fast under follow-up questions. 'What is a Spring bean?' is the easy question. 'What happens when you inject a prototype bean into a singleton?' is the question that separates candidates.

A bean is an object whose complete lifecycle — creation, dependency resolution, initialization, and destruction — is managed by the Spring IoC container. You declare a bean with @Component, @Service, @Repository, @Controller, or @Bean inside a @Configuration class. The 'inversion' in Inversion of Control is that your code no longer instantiates objects with new — the container does, and hands them to you fully assembled.

The three injection styles matter for reasons beyond style. Constructor injection makes dependencies explicit and mandatory — you cannot create the object without providing all its dependencies, which means the compiler enforces completeness. Fields annotated as final with constructor injection are immutable for the object's lifetime. Unit tests can instantiate the class directly with mock objects passed to the constructor — no Spring context, no annotation processing, no test startup time. Field injection (@Autowired on a field) looks clean but hides dependencies from callers, prevents immutability, and forces tests to use reflection or a full Spring context to populate private fields. In a codebase I reviewed at a mid-size company, the test suite took 45 minutes because every test class needed a full Spring context due to field injection across 200+ service classes. Refactoring to constructor injection over two sprints cut test time to under 5 minutes.
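The testability claim is easy to demonstrate without Spring at all. The sketch below uses invented names (PriceCatalog, CheckoutService — not from any codebase discussed here); the point is that with constructor injection, a "mock" is just a lambda and the class under test is created with plain new:

```java
// Constructor injection makes the class directly testable: no Spring context,
// no reflection, no @SpringBootTest. PriceCatalog and CheckoutService are
// illustrative names invented for this sketch.
interface PriceCatalog {
    long priceInCents(String sku);
}

class CheckoutService {
    private final PriceCatalog catalog;  // final: immutable for the object's lifetime

    // The constructor is the complete, compiler-enforced dependency list —
    // you cannot build a CheckoutService without providing a PriceCatalog.
    CheckoutService(PriceCatalog catalog) {
        this.catalog = catalog;
    }

    long total(String... skus) {
        long sum = 0;
        for (String sku : skus) {
            sum += catalog.priceInCents(sku);
        }
        return sum;
    }
}

public class ConstructorInjectionDemo {
    public static void main(String[] args) {
        // The "mock" is a lambda implementing the interface — no framework needed
        CheckoutService service = new CheckoutService(sku -> 250);
        System.out.println(service.total("FORGE-KB", "FORGE-MOUSE"));  // prints 500
    }
}
```

With an @Autowired private field instead, this same test would need either reflection to set the field or a full Spring context — which is exactly how test suites balloon to 45 minutes.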

Bean scope is where interviews get interesting. Singleton is the default — one shared instance per Spring context, created at startup, shared across all threads simultaneously. Prototype means a fresh instance every time the bean is requested from the context — but this only works if you retrieve it from the context each time. The classic production bug: annotate a bean with @Scope("prototype") and inject it into a singleton via @Autowired. Spring resolves the dependency once during singleton creation and stores the reference. Every subsequent use of that field returns the same prototype instance — scope declared, scope ignored. The fix is injecting ObjectProvider<T> and calling provider.getObject() at runtime, which retrieves a fresh bean from the context on each call.

io/thecodeforge/dipatterns/ForgeOrderService.java · JAVA
package io.thecodeforge.dipatterns;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

/**
 * io.thecodeforge: Demonstrating correct prototype bean usage in a singleton.
 *
 * ForgeAuditToken is prototype-scoped — each payment transaction
 * must receive a fresh instance with its own unique timestamp.
 *
 * WRONG pattern (what the production incident used):
 *   @Autowired
 *   private ForgeAuditToken auditToken;  // Captured once at startup — always same instance
 *
 * CORRECT pattern (what ObjectProvider enables):
 *   Inject ObjectProvider<ForgeAuditToken> and call getObject() per transaction
 */
@Component
@Scope("prototype")  // Fresh instance every time getObject() is called
class ForgeAuditToken {

    private final String tokenId;
    private final long createdAt;

    public ForgeAuditToken() {
        // Each instance gets its own UUID — only works if a new instance is created
        this.tokenId = java.util.UUID.randomUUID().toString();
        this.createdAt = System.currentTimeMillis();  // epoch millis, matching the sample output
    }

    public String getTokenId() { return tokenId; }
    public long getCreatedAt() { return createdAt; }
}

@Service
public class ForgeOrderService {

    // ObjectProvider<T>: the correct way to consume a prototype bean from a singleton
    // Spring injects the provider once (fine — providers are stateless)
    // provider.getObject() retrieves a fresh ForgeAuditToken from the context each time
    private final ObjectProvider<ForgeAuditToken> auditTokenProvider;

    // Constructor injection: dependency is explicit, mandatory, final, and testable
    public ForgeOrderService(ObjectProvider<ForgeAuditToken> auditTokenProvider) {
        this.auditTokenProvider = auditTokenProvider;
    }

    public void processPayment(String orderId) {
        // Each call creates a fresh ForgeAuditToken — this is the correct behavior
        ForgeAuditToken token = auditTokenProvider.getObject();
        System.out.printf("Order %s: auditToken=%s, createdAt=%d%n",
            orderId, token.getTokenId(), token.getCreatedAt());
    }

    // Test helper — demonstrates that two calls produce distinct instances
    public boolean verifyPrototypeBehavior() {
        ForgeAuditToken first = auditTokenProvider.getObject();
        ForgeAuditToken second = auditTokenProvider.getObject();
        // True if prototype scope is working correctly — different objects, different token IDs
        return first != second && !first.getTokenId().equals(second.getTokenId());
    }
}
▶ Output
// Two consecutive processPayment calls produce distinct audit tokens:
// Order ORD-001: auditToken=f47ac10b-58cc-4372-a567-0e02b2c3d479, createdAt=1718000001001
// Order ORD-002: auditToken=3f2504e0-4f89-11d3-9a0c-0305e82c3301, createdAt=1718000001892
//
// verifyPrototypeBehavior() returns: true
//
// WRONG pattern output (direct @Autowired injection):
// Order ORD-001: auditToken=f47ac10b-58cc-4372-a567-0e02b2c3d479, createdAt=1718000000001
// Order ORD-002: auditToken=f47ac10b-58cc-4372-a567-0e02b2c3d479, createdAt=1718000000001
// Same token ID and timestamp — prototype scope effectively ignored
⚠ The Prototype-in-Singleton Trap Is Silent and Dangerous
Injecting a prototype-scoped bean directly into a singleton via @Autowired field injection or constructor parameter does not throw an error at startup or at runtime. Spring resolves the dependency once, stores the reference in the singleton, and considers the injection complete. Every subsequent use of that field returns the original instance — the prototype scope is silently honored only for that first creation. The application runs correctly in functional tests (which typically test single operations) and fails in production under concurrent load or extended operation where the stateful nature of the shared instance becomes observable. ObjectProvider<T> is the correct fix — it defers bean retrieval to call time rather than injection time.
📊 Production Insight
A team refactored a large codebase from field injection to constructor injection over two sprints. The primary motivation was test speed — unit tests required a full Spring context with field injection because there was no way to inject mocks into private @Autowired fields without reflection hacks. After the refactor, service classes could be instantiated in tests with new ServiceClass(mockDependency1, mockDependency2) — no Spring context, no @SpringBootTest, no startup overhead. The test suite dropped from 47 minutes to 4 minutes. The secondary benefit was that several hidden circular dependencies became compiler errors rather than runtime failures — constructor injection makes circular dependencies structurally impossible, whereas field injection allows Spring to work around them with CGLIB proxies in ways that are difficult to debug.
🎯 Key Takeaway
Constructor injection is the right default — it makes dependencies explicit, enables immutability with final fields, eliminates the need for a Spring context in unit tests, and makes circular dependencies structurally impossible at compile time.
Never inject a prototype bean directly into a singleton — Spring resolves field and constructor dependencies once at singleton creation time, silently converting your prototype into a singleton for the application's lifetime.
Use ObjectProvider<T> and call provider.getObject() each time you need a fresh prototype instance — this is the production-correct pattern, and it makes the scope behavior explicit to anyone reading the code.
Choosing the Right Injection Style and Bean Scope
If: Mandatory dependency that the class cannot function without
Use: Constructor injection with a final field — dependency is explicit, immutable, and the class is directly testable without a Spring context

If: Optional dependency that may or may not be provided depending on the deployment environment
Use: Constructor injection with Optional<T> — the Optional is empty if the bean does not exist, no null checks needed

If: Need a fresh instance of a stateful bean each time a specific operation is performed
Use: @Scope("prototype") on the bean and inject ObjectProvider<T> into the singleton consumer — call provider.getObject() at operation time, not at injection time

If: Need a bean scoped to a single HTTP request — for example, a request-correlation-ID holder
Use: @Scope(value = "request", proxyMode = ScopedProxyMode.TARGET_CLASS) — Spring creates a new instance per incoming HTTP request and injects a proxy into the singleton

If: Need a bean scoped to a user session — for example, a shopping cart or user preference store
Use: @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS) — Spring creates a new instance per user session with session-aware proxy injection

If: Possible circular dependency between two beans
Use: Refactor to break the cycle — extract a shared interface or a third class that both depend on. If truly unavoidable, use setter injection on one side (not constructor) and document why explicitly
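The Optional<T> pattern from the table can be sketched without Spring at all — in a real application, declaring an Optional<SomeBean> constructor parameter arrives empty when no matching bean exists; here the Optional is passed by hand, and all names are hypothetical:

```java
import java.util.Optional;

// Plain-Java sketch of optional-dependency constructor injection.
public class OptionalDependencySketch {

    interface Notifier { String send(String msg); }  // hypothetical optional bean

    static class AlertService {
        private final Optional<Notifier> notifier;  // empty when not deployed

        AlertService(Optional<Notifier> notifier) { this.notifier = notifier; }

        String alert(String msg) {
            // No null checks — the Optional encodes "may not be present"
            return notifier.map(n -> n.send(msg)).orElse("no notifier configured");
        }
    }

    public static void main(String[] args) {
        AlertService without = new AlertService(Optional.empty());
        System.out.println(without.alert("disk full"));  // no notifier configured

        Notifier slack = msg -> "sent: " + msg;  // hypothetical implementation
        AlertService with = new AlertService(Optional.of(slack));
        System.out.println(with.alert("disk full"));     // sent: disk full
    }
}
```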

@ConfigurationProperties vs @Value — Why This Matters Beyond the Interview Room

Every Spring Boot application needs external configuration — database URLs, API keys, timeouts, feature flags. How you bind that configuration determines whether your application fails loudly at startup when configuration is missing, fails silently at runtime, or catches misconfiguration before a single request is processed.

@Value injects a single property value with a placeholder expression: @Value("${forge.payment.gateway.url}") — a property placeholder, not SpEL (#{...}). It works for isolated, one-off properties. It falls apart when you have multiple related properties. Renaming forge.payment.gateway.url to forge.payment.url in application.properties produces no compile-time error. Without a default, Spring Boot at least fails at startup with 'Could not resolve placeholder' — though the error names only that one placeholder, not the group of settings that moved. The moment someone adds a default — @Value("${forge.payment.gateway.url:}") — to quiet that error, the rename becomes fully silent: an empty string is injected, and at runtime this manifests as an HTTP call to a malformed URL, a connection timeout, or a NullPointerException several layers deep — none of which obviously points to a misconfigured property name.

@ConfigurationProperties binds a prefix of properties to a typed Java class. All properties under forge.payment are bound to fields on ForgePaymentProperties, with type conversion handled automatically. The class can be annotated with @Validated and carry JSR-303 constraints: @NotBlank on the URL field, @Min(1000) on the timeout field. If a required property is missing or malformed, the application fails to start with an explicit error message pointing to the exact property name. This is the fail-fast principle applied to configuration — you find out on the first startup in a new environment, not when the first payment is processed at 2 AM.

The IDE integration is the other practical advantage. With spring-boot-configuration-processor on the compile classpath, @ConfigurationProperties classes generate metadata that powers IDE autocomplete for application.properties and application.yml. Developers see all available properties with their types and descriptions as they type. @Value provides none of this — every property name is a string literal that the IDE cannot verify.

io/thecodeforge/config/ForgePaymentProperties.java · JAVA
package io.thecodeforge.config;

import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotNull;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.validation.annotation.Validated;

/**
 * io.thecodeforge: Type-safe configuration binding with startup validation.
 *
 * Add to pom.xml for IDE autocomplete:
 *   <dependency>
 *     <groupId>org.springframework.boot</groupId>
 *     <artifactId>spring-boot-configuration-processor</artifactId>
 *     <optional>true</optional>
 *   </dependency>
 *
 * All fields validated at startup — application fails immediately if
 * configuration is missing or malformed, not at first use.
 */
@ConfigurationProperties(prefix = "forge.payment")
@Validated
public class ForgePaymentProperties {

    @NotBlank(message = "forge.payment.gateway-url must not be blank")
    private String gatewayUrl;

    @NotBlank(message = "forge.payment.api-key must not be blank")
    private String apiKey;

    @NotNull
    @Min(value = 1000, message = "forge.payment.timeout-ms must be at least 1000ms")
    private Integer timeoutMs;

    @NotNull
    @Min(value = 1, message = "forge.payment.max-retries must be at least 1")
    private Integer maxRetries = 3;  // Default value — used if property is absent

    // Standard getters and setters omitted for brevity
    // In practice, use @Getter from Lombok or generate with IDE
    public String getGatewayUrl() { return gatewayUrl; }
    public void setGatewayUrl(String gatewayUrl) { this.gatewayUrl = gatewayUrl; }
    public String getApiKey() { return apiKey; }
    public void setApiKey(String apiKey) { this.apiKey = apiKey; }
    public Integer getTimeoutMs() { return timeoutMs; }
    public void setTimeoutMs(Integer timeoutMs) { this.timeoutMs = timeoutMs; }
    public Integer getMaxRetries() { return maxRetries; }
    public void setMaxRetries(Integer maxRetries) { this.maxRetries = maxRetries; }
}

// Enable scanning of @ConfigurationProperties classes:
// Add @EnableConfigurationProperties(ForgePaymentProperties.class) to a @Configuration class,
// or annotate ForgePaymentProperties itself with @Component.

// application.properties:
// forge.payment.gateway-url=https://api.forgepay.io/v2
// forge.payment.api-key=${FORGE_PAYMENT_API_KEY}   <-- resolved from environment at runtime
// forge.payment.timeout-ms=5000
// forge.payment.max-retries=3

// Usage in a service:
// @Service
// public class ForgePaymentService {
//     private final ForgePaymentProperties config;
//
//     public ForgePaymentService(ForgePaymentProperties config) {
//         this.config = config;
//     }
//
//     public void processPayment(PaymentRequest request) {
//         // No null checks, no string parsing, no silent misconfiguration
//         String url = config.getGatewayUrl(); // Type-safe, validated at startup
//     }
// }
▶ Output
// Startup with missing forge.payment.gateway-url:
//
// APPLICATION FAILED TO START
//
// Description:
// Binding to target org.springframework.boot.context.properties.bind.BindException:
// Failed to bind properties under 'forge.payment' to
// io.thecodeforge.config.ForgePaymentProperties
//
// Reason: forge.payment.gateway-url must not be blank
//
// Action:
// Update your application's configuration. The following
// properties are missing or invalid:
// forge.payment.gateway-url (reason: must not be blank)
//
// Versus @Value with a default — e.g. @Value("${forge.payment.gateway-url:}") — and the property missing:
// No startup error. Application starts successfully.
// config.getGatewayUrl() silently returns the empty-string default.
// NullPointerException or malformed-URL failure on first payment request — production impact.
// (A @Value placeholder with no default at least fails startup with
//  'Could not resolve placeholder' — but it names one property and validates nothing.)
💡Never Store Secrets in application.properties — Use Environment Variable Placeholders
The ${FORGE_PAYMENT_API_KEY} syntax in application.properties resolves the value from the environment at runtime — the actual secret never appears in the properties file, the Git repository, or the Docker image. The properties file contains only the placeholder. The secret lives only in the environment (Kubernetes Secret, AWS Secrets Manager, HashiCorp Vault) and is injected at container startup. This is not optional for production deployments. A secret committed to a Git repository — even a private one — is considered compromised. It survives in Git history even after deletion. The placeholder pattern ensures that developers can configure their local environment with their own test credentials without those credentials ever touching version control.
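A hand-rolled resolver makes the fail-fast behavior concrete — this is illustrative only (Spring's Environment performs the real placeholder resolution), and the helper names are hypothetical:

```java
import java.util.Map;

// Minimal sketch of what the ${FORGE_PAYMENT_API_KEY} placeholder accomplishes:
// the value is looked up in the environment at startup, and a missing variable
// fails loudly instead of injecting null.
public class EnvPlaceholderSketch {

    static String resolve(String placeholder, Map<String, String> env) {
        if (!placeholder.startsWith("${") || !placeholder.endsWith("}")) {
            return placeholder;  // a literal value, not a placeholder
        }
        String name = placeholder.substring(2, placeholder.length() - 1);
        String value = env.get(name);
        if (value == null) {
            // Fail fast: the secret is required, so refuse to start without it
            throw new IllegalStateException("Missing environment variable: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("FORGE_PAYMENT_API_KEY", "sk-test-123");
        System.out.println(resolve("${FORGE_PAYMENT_API_KEY}", env));  // sk-test-123
        System.out.println(resolve("plain-value", env));               // plain-value
    }
}
```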
📊 Production Insight
A team had 34 @Value annotations — most carrying empty-string defaults — scattered across 12 service classes in a payment processing application. When the payment gateway changed its API versioning structure — from forge.payment.gateway.url to forge.payment.url — the team did a search-and-replace across the codebase and missed three @Value annotations in less-visited service classes. The application compiled cleanly, deployed successfully, and the three affected services silently fell back to empty gateway URLs. The failures surfaced as timeout exceptions three hours after deployment during a batch reconciliation job. Migrating to @ConfigurationProperties with @Validated would have surfaced all three misses as startup failures on the first deployment to any environment, eliminating the three-hour production window entirely.
🎯 Key Takeaway
@ConfigurationProperties with @Validated fails fast at startup for missing or malformed configuration — you find out immediately in any environment, not at the moment the misconfigured code path is first exercised.
@Value is appropriate for isolated, one-off properties in small applications — it is not appropriate for groups of related configuration values, where a missed rename can slip through as a silent runtime failure whenever a default value masks the missing property.
Store secret values in environment variables referenced via ${ENV_VAR_NAME} placeholders — the actual secret never touches the properties file, the repository, or the image.

Production-Grade Dockerization for Spring Boot — What Interviewers Actually Probe

In a senior interview, you are not just asked about code — you are asked about how that code reaches production reliably and securely. Modern Spring Boot applications are almost exclusively deployed via Docker and Kubernetes, and the quality of your Dockerfile is a direct signal of production experience.

Single-stage Dockerfiles — a common starting point — use a JDK image to both compile and run the application. The problem is that the JDK is 400-500MB larger than the JRE, includes compiler tools, diagnostic utilities, and development libraries that a running application never needs, and exposes a significantly larger attack surface. Every unnecessary binary in a production image is a potential vulnerability that your security scanner will flag and your compliance team will question.

Multi-stage builds solve this. Stage 1 uses a JDK image with Maven to compile the application — this stage is heavyweight but temporary. Stage 2 starts fresh from a JRE-only image, copies only the built JAR, and becomes the actual production image. The build stage is discarded entirely — none of its tools, caches, or intermediate files appear in the final image.

Running as root is the other issue interviewers probe. The default behavior without a USER directive is to run as root (UID 0) inside the container. If the application has a vulnerability that allows command execution — a deserialization exploit, a path traversal in a file upload endpoint — the attacker operates with root privileges inside the container. A dedicated non-root user confines any exploit to low-privilege file system access.

Layer caching is the build performance dimension. If you COPY the entire source tree before running the dependency download, every code change — including a one-line fix — invalidates the Maven dependency cache layer and forces a full re-download. Copying pom.xml first and running dependency resolution before copying source means dependency downloads are only re-triggered when pom.xml changes, which is far less frequent than code changes.

Dockerfile · DOCKERFILE
# ── Stage 1: Build ────────────────────────────────────────────────────────────
# eclipse-temurin is the preferred base: Adoptium community-maintained, well-scanned
# Using Java 21 — a long-term support (LTS) release
FROM eclipse-temurin:21-jdk-jammy AS build
WORKDIR /app

# Copy dependency manifest first — layer cache key is pom.xml
# When only source code changes, this layer and the next are reused from cache
# Saves 60-180 seconds on every code-only rebuild
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline -q

# Source code changes here but dependency cache above is preserved
COPY src ./src
RUN ./mvnw clean package -DskipTests -q

# ── Stage 2: Production Runtime ───────────────────────────────────────────────
# JRE only — no compiler, no javac, no Maven, no source code in production image
FROM eclipse-temurin:21-jre-jammy
WORKDIR /app

# Security: dedicated non-root system user
# -r: system account (no home directory); -s /sbin/nologin: no interactive login
# If the application is compromised, the attacker operates as low-privilege 'springuser'
RUN groupadd -r springgroup && useradd -r -g springgroup -s /sbin/nologin springuser

# Create directories the app needs before switching to non-root user
RUN mkdir -p /app/logs /app/tmp && chown -R springuser:springgroup /app

# Copy only the built artifact — nothing from Stage 1 comes through except this file
COPY --chown=springuser:springgroup --from=build /app/target/*.jar app.jar

# Switch to non-root before ENTRYPOINT — all subsequent operations run as springuser
USER springuser

# JVM flags for container environments:
#   UseContainerSupport: read memory limits from cgroups, not /proc/meminfo (host RAM)
#   MaxRAMPercentage:    allocate 75% of container memory as heap
#   ExitOnOutOfMemoryError: fail loudly instead of degrading silently under memory pressure
ENTRYPOINT ["java", \
  "-XX:+UseContainerSupport", \
  "-XX:MaxRAMPercentage=75.0", \
  "-XX:+ExitOnOutOfMemoryError", \
  "-Dfile.encoding=UTF-8", \
  "-jar", "app.jar"]
▶ Output
# Build: docker build -t io.thecodeforge/forge-api:1.0.0 .
#
# Image size comparison:
# Single-stage JDK image: 834MB
# Multi-stage JRE image: 248MB (70% reduction)
#
# Security scan results (Trivy):
# Single-stage JDK: 43 CVEs (12 HIGH, 3 CRITICAL)
# Multi-stage JRE: 7 CVEs (1 HIGH, 0 CRITICAL)
#
# Build time comparison (warm cache, code-only change):
# Without layer ordering: 3m 40s (re-downloads all dependencies)
# With layer ordering: 0m 28s (dependency layer cached)
#
# Verify non-root user (override the java ENTRYPOINT):
# docker run --rm --entrypoint whoami io.thecodeforge/forge-api:1.0.0
# springuser
#
# Verify build tools absent from production image:
# docker run --rm --entrypoint sh io.thecodeforge/forge-api:1.0.0 -c 'which mvn || echo absent'
# absent
⚠ UseContainerSupport Is Critical — Ignoring It Causes Silent OOM Kills
Before Java 10, the JVM read /proc/meminfo to determine available memory and sized the heap from the host machine's total RAM. In a 512MB container on a 64GB host, the JVM would allocate roughly 16GB of heap. The container's OOM killer would then terminate the JVM with exit code 137 — no Java exception, no stack trace, just the process disappearing. Java 10 and later (backported to 8u191) enable -XX:+UseContainerSupport by default, reading memory limits from the cgroup filesystem instead of /proc/meminfo. Adding the flag explicitly in the ENTRYPOINT makes the intention clear and ensures correct behavior regardless of minor JVM version differences. Pair it with -XX:MaxRAMPercentage=75.0 to leave headroom for metaspace, code cache, and thread stacks — the non-heap memory the JVM needs beyond the configured heap.
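The arithmetic behind these numbers is worth internalizing; a tiny helper (hypothetical, not a JVM API) reproduces it:

```java
// Quick arithmetic behind -XX:MaxRAMPercentage: the heap is a percentage of the
// memory limit the JVM perceives, and the remainder is headroom for metaspace,
// code cache, and thread stacks.
public class HeapSizingSketch {

    static long maxHeapMb(long perceivedLimitMb, double maxRamPercentage) {
        return (long) (perceivedLimitMb * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        // 512MB container at 75% → 384MB heap, 128MB non-heap headroom
        System.out.println(maxHeapMb(512, 75.0));        // 384
        // A container-unaware JVM sizing from a 64GB host at the old default
        // of roughly 1/4 of physical RAM → ~16GB heap in a 512MB container
        System.out.println(maxHeapMb(64 * 1024, 25.0));  // 16384
    }
}
```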
📊 Production Insight
A team switched their base image from eclipse-temurin:17 (full JDK, 650MB) to eclipse-temurin:17-jre-jammy (JRE only, 210MB) as part of a security hardening sprint. The image size reduction was the expected benefit. The unexpected benefit was Kubernetes pod startup time: pulling the smaller image from ECR in a new availability zone dropped from 25 seconds to 8 seconds. During an incident where they needed to scale from 3 pods to 15 pods in under 2 minutes, this 17-second difference per pod was the difference between recovering within their SLA and breaching it. Image size is not just a build metric — it is an incident response metric.
🎯 Key Takeaway
Multi-stage builds separate the build environment from the runtime — production image contains only JRE and your application JAR, reducing size by 70% and CVE count significantly.
Never run the application as root in production — a dedicated non-root user confines any exploit to low-privilege access and satisfies most compliance frameworks.
COPY pom.xml before COPY src to cache the dependency layer — code-only builds drop from 3+ minutes to under 30 seconds on a warm cache.
Dockerizing Spring Boot — Key Decisions
If: Building the application inside Docker rather than in CI before Docker
Use: A multi-stage build — Maven/JDK in stage 1 compiles, JRE-only in stage 2 runs. Stage 1 is discarded from the final image

If: Container is running as root — security audit finding or compliance requirement
Use: Add groupadd and useradd in the Dockerfile and switch with the USER directive before ENTRYPOINT — limits exploit impact to a low-privilege user

If: Docker builds re-download all Maven dependencies on every code change
Use: COPY pom.xml before COPY src and run dependency:go-offline between them — the dependency layer is cached against the pom.xml hash, not source code

If: JVM being OOM-killed despite apparently sufficient container memory
Use: Add -XX:+UseContainerSupport and -XX:MaxRAMPercentage=75.0 — without these flags an older JVM reads host RAM instead of the container limit

If: Need faster Kubernetes pod startup for scaling during incidents
Use: Minimize final image size with a JRE-only base and multi-stage build — smaller images pull faster from registries, directly impacting scale-up time

The Spring Bean Lifecycle — What @PostConstruct Can Do That a Constructor Cannot

The Spring Bean Lifecycle is a standard interview topic, but the follow-up question — 'What is the difference between the constructor and @PostConstruct?' — trips more candidates than it should. Understanding the lifecycle is not just academic. It explains why initialization code in a constructor sometimes silently fails, why AOP proxies do not apply to constructor code, and why database schema validation or connection pre-warming must happen in @PostConstruct rather than in the constructor.

The lifecycle follows a strict sequence. Spring instantiates the bean via its constructor — at this point, only the arguments passed to the constructor are available. Spring has not yet injected field-level @Autowired dependencies, has not applied @Value substitutions to fields, and has not run any BeanPostProcessor on the new instance — including the one that wraps it in an AOP proxy. Initialization logic in the constructor that uses a field-injected dependency hits null. Constructor-injected dependencies do arrive as fully initialized beans, proxies included — but the bean under construction is itself incomplete, unregistered, and unproxied, so its own annotated methods carry no Spring behavior yet.

After construction, Spring populates the remaining field dependencies and property values. It then runs the before-initialization BeanPostProcessor hooks — which is where @PostConstruct methods are invoked — and finally the after-initialization hooks, where AOP and transactional proxies are wrapped around the bean. By the time @PostConstruct runs, every dependency is injected and every property bound, making it the earliest safe point for initialization that uses injected beans or configuration values. The one caveat: the bean's own proxy is created after @PostConstruct, so self-invoking one of its own @Transactional methods there still bypasses the advice.
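The proxy mechanics can be demonstrated with nothing but the JDK: advice attached by a proxy runs only when a call passes through the proxy reference, never when the raw object is invoked directly. A hand-rolled sketch — all names hypothetical; Spring's transactional proxies (JDK or CGLIB) work in the same spirit:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Plain-JDK sketch of why annotation-driven behavior lives in a proxy wrapper.
public class ProxySketch {

    interface Repository { String findAll(); }

    static class RawRepository implements Repository {
        public String findAll() { return "rows"; }
    }

    // Wraps a target so every interface call is surrounded by "transaction" advice.
    static Repository transactionalProxy(Repository target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("BEGIN;");                       // stand-in for opening a tx
            Object result = method.invoke(target, args);
            log.append("COMMIT;");                      // stand-in for committing
            return result;
        };
        return (Repository) Proxy.newProxyInstance(
                Repository.class.getClassLoader(),
                new Class<?>[] { Repository.class }, handler);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        RawRepository raw = new RawRepository();
        Repository proxied = transactionalProxy(raw, log);

        raw.findAll();                // direct call on the raw object — no advice runs
        System.out.println(log);      // (empty)

        proxied.findAll();            // call through the proxy — advice wraps the call
        System.out.println(log);      // BEGIN;COMMIT;
    }
}
```

A constructor or self-invoked call is always the `raw.findAll()` case — the proxy reference does not exist from inside the object itself.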

For databases, this distinction is operationally significant. An injected repository is already a complete, transaction-proxied bean when it reaches your constructor or @PostConstruct method, but pre-warming caches from the constructor still runs against a half-built consumer: field dependencies may be null, property values unbound, and failures depend on bean creation order in ways that are hard to reproduce. The same call in @PostConstruct runs against a fully assembled bean. @PreDestroy mirrors this — it runs before Spring destroys the bean and while dependencies are still available, making it the correct place for cleanup logic like closing custom connections or flushing write buffers.

io/thecodeforge/lifecycle/ForgeCacheWarmer.java · JAVA
package io.thecodeforge.lifecycle;

import jakarta.annotation.PostConstruct;
import jakarta.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

/**
 * io.thecodeforge: Demonstrating @PostConstruct and @PreDestroy lifecycle hooks.
 *
 * Why this pattern matters:
 *   Constructor: only the constructor arguments exist — field injection, @Value
 *                bindings, and this bean's own AOP proxy are not applied yet
 *   @PostConstruct: dependencies injected and properties bound — safe to use
 *                   injected beans, including their transactional proxies
 *   @PreDestroy: called before the bean is destroyed — safe for cleanup logic
 */
@Component
public class ForgeCacheWarmer {

    private static final Logger log = LoggerFactory.getLogger(ForgeCacheWarmer.class);

    private final ForgeProductRepository productRepository;
    private final ForgeProductCache productCache;

    // Constructor: keep it to field assignment. The injected repository is
    // already a complete, transaction-proxied bean, but this ForgeCacheWarmer
    // instance is not yet fully initialized or proxied itself.
    public ForgeCacheWarmer(
            ForgeProductRepository productRepository,
            ForgeProductCache productCache) {
        this.productRepository = productRepository;
        this.productCache = productCache;
        log.info("ForgeCacheWarmer constructor: dependencies injected, proxies not yet applied");
    }

    // @PostConstruct: all dependencies injected, @Value bindings resolved, and the
    // injected beans carry their proxies — safe to call their transactional
    // methods, safe to read all properties
    @PostConstruct
    public void warmCache() {
        log.info("@PostConstruct: pre-warming product cache from database");
        // productRepository.findAllActive() is @Transactional — works correctly here
        // because the repository arrived fully initialized, proxy included
        productRepository.findAllActive()
            .forEach(product -> productCache.put(product.getId(), product));
        log.info("Cache warmed with {} products", productCache.size());
    }

    // @PreDestroy: called before Spring destroys this bean
    // Dependencies are still available — safe for cleanup
    @PreDestroy
    public void cleanup() {
        log.info("@PreDestroy: flushing product cache before shutdown");
        productCache.evictAll();
    }
}
▶ Output
// Application startup log:
// INFO ForgeCacheWarmer - ForgeCacheWarmer constructor: dependencies injected, proxies not yet applied
// INFO ForgeCacheWarmer - @PostConstruct: pre-warming product cache from database
// INFO ForgeCacheWarmer - Cache warmed with 2847 products
// INFO o.s.b.w.embedded.tomcat.TomcatWebServer - Tomcat started on port 8080
//
// Application shutdown log:
// INFO ForgeCacheWarmer - @PreDestroy: flushing product cache before shutdown
// INFO o.s.b.web.embedded.tomcat.TomcatWebServer - Tomcat stopped
//
// If productRepository were field-injected and warmCache() ran in the constructor:
// NullPointerException — field injection happens after the constructor returns,
// so the repository field would still be null when the call executed
💡AOP Proxies Are Applied After Initialization — Self-Invocation Never Sees Them
This is the conceptual gap that causes @Transactional to 'not work' for developers who put database calls in constructors. Spring applies AOP proxies — including transaction management — in the after-initialization BeanPostProcessor phase, which runs after the constructor and after @PostConstruct. Two consequences follow. First, a bean's own @Transactional, @Cacheable, @Async, or retry-annotated methods are never advised when invoked from its constructor, from @PostConstruct, or via self-invocation at any point — the call does not pass through the proxy wrapper. Initialization that genuinely needs the bean's own proxied behavior belongs in an ApplicationReadyEvent listener instead. Second, injected dependencies are unaffected: each was fully created — proxy and all — before being handed over, which is why calling an injected repository's @Transactional method from @PostConstruct works correctly.
📊 Production Insight
A team put cache pre-warming logic in a @Repository class constructor — calling another @Transactional repository method during its own initialization. The call failed intermittently in production depending on the order Spring initialized beans. When ForgeProductRepository was initialized before its transaction manager was fully configured, the constructor call succeeded. When initialization order was different (triggered by a Spring Boot version upgrade that changed the default bean creation order), it failed with TransactionRequiredException. Moving the pre-warming logic to a @PostConstruct method in a dedicated ForgeCacheWarmer component made the initialization order irrelevant — @PostConstruct always runs after the full context is assembled.
🎯 Key Takeaway
Use @PostConstruct for initialization that depends on injected dependencies or property values — the constructor runs before field injection and @Value binding are complete, and before the bean is fully assembled.
@PreDestroy is the correct place for cleanup logic — it runs before the bean is destroyed and while all dependencies are still available.
Calling a bean's own @Transactional method from its constructor — or via self-invocation from any method — bypasses transaction management because the call never passes through the proxy; injected dependencies, by contrast, arrive already proxied.
🗂 @Value Injection vs. @ConfigurationProperties
@ConfigurationProperties is the production choice for any group of related configuration values. @Value is appropriate only for isolated, one-off properties where no validation or grouping is needed.
Use case
  @Value: Single, isolated property with no related siblings
  @ConfigurationProperties: Logical group of related properties — database config, payment gateway settings, feature flags

Type safety
  @Value: Limited — String-to-type conversion is implicit and unhelpful for complex types
  @ConfigurationProperties: Full — binds to strongly typed fields with automatic conversion

Validation support
  @Value: None out of the box — no JSR-303 constraint checking of any kind
  @ConfigurationProperties: Full JSR-303 validation with @Validated — missing or invalid required properties cause startup failure with an explicit error message

IDE autocomplete
  @Value: None — property names are unverified string literals that the IDE cannot navigate or validate
  @ConfigurationProperties: Full autocomplete in application.properties and application.yml with the spring-boot-configuration-processor dependency

Refactoring safety
  @Value: Fragile — a renamed property either breaks startup with a placeholder error or, when a default was supplied, silently injects the fallback
  @ConfigurationProperties: Safer — binding is by prefix and field name, field renames are ordinary Java refactors, and a missing required property fails at startup via validation

Testability
  @Value: Requires a Spring context or reflection to populate @Value fields in unit tests
  @ConfigurationProperties: Easily instantiated as a plain Java object in unit tests — new ForgePaymentProperties() with setters, no Spring context needed

Startup failure behavior
  @Value: A placeholder with no default fails startup with 'Could not resolve placeholder'; with a default, the fallback is injected silently and failure is deferred to first use
  @ConfigurationProperties: A missing required property causes immediate startup failure with a descriptive error pointing to the exact property name

Best for
  @Value: Truly isolated, one-off values in prototype code or small utilities — two or fewer properties with no validation requirement
  @ConfigurationProperties: Any production application with logically grouped configuration — the additional structure pays off immediately in a team environment

🎯 Key Takeaways

  • Auto-configuration is conditional, not magical — it uses @ConditionalOnClass, @ConditionalOnMissingBean, and @ConditionalOnProperty guards. Your explicit bean definitions always win. The Conditions Evaluation Report (--debug) shows exactly what activated and why.
  • Constructor injection is not just a style preference — it enables immutability with final fields, makes dependencies explicit to callers, and makes unit tests trivial without starting a Spring context. Field injection actively hides dependencies and forces tests to use Spring.
  • @ConfigurationProperties over @Value for any group of related configuration — you get type-safe binding, JSR-303 validation that fails fast at startup, and IDE autocomplete. @Value produces silent runtime failures when properties are missing or renamed.
  • Never inject a prototype-scoped bean directly into a singleton — Spring resolves the dependency once at singleton creation time, silently converting your prototype to a singleton. Use ObjectProvider<T> and call getObject() at the point where you need a fresh instance.
  • Use @PostConstruct for initialization that depends on injected dependencies — constructors run before field injection and @Value binding are complete, and a bean's own @Transactional or @Cacheable methods are never advised when self-invoked. @PostConstruct runs once the bean's dependencies and properties are fully in place.
  • Run with --debug at least once during development on any new service — the Conditions Evaluation Report tells you what auto-configuration activated, what was skipped, and exactly why. It is the fastest path from 'why is this bean missing' to 'I understand the fix.'

⚠ Common Mistakes to Avoid

    Using field injection (@Autowired on a field) as the default everywhere
    Symptom

    Unit tests fail with NullPointerException because the @Autowired field is null without a running Spring context. Mocking requires @InjectMocks combined with @Mock and manual @ExtendWith(MockitoExtension.class) setup instead of simple constructor argument passing. Test startup time scales with codebase size because every test class that needs mocks requires a partial or full Spring context.

    Fix

    Switch to constructor injection so dependencies are explicit, final, and injectable without Spring. Mandatory dependencies become constructor parameters — the compiler enforces that all required dependencies are provided. Unit tests can use new ServiceClass(mockRepository, mockCache) directly without annotation magic. For large codebases, migrate incrementally: start with any class that is slow to test and work outward from the most-tested code.

    Ignoring bean scope when injecting a prototype-scoped bean into a singleton
    Symptom

    A bean declared as @Scope("prototype") behaves exactly like a singleton — the same instance is returned every time, carrying stale state from a previous operation. Audit tokens are shared across transactions, request-specific state leaks between users, statistical accumulators never reset. The bug is invisible in unit tests that test single operations and only surfaces under concurrent load or extended runtime.

    Fix

    Inject ObjectProvider<YourPrototypeBean> into the singleton consumer. Call provider.getObject() at the point in the code where you need a fresh instance — not in the constructor or at field injection time. Write a test that calls provider.getObject() twice in sequence and asserts that the returned references are different objects — this is the minimal verification that scope is being honored.

    Committing secrets — API keys, database passwords, signing keys — to application.properties in version control
    Symptom

    Secrets appear in Git history where they are permanent regardless of subsequent commits that remove them. Developers share the same credentials across environments. A repository access breach exposes production credentials immediately. Rotating credentials requires coordinating changes across every developer's local environment.

    Fix

    Use the ${ENV_VAR_NAME} placeholder pattern in application.properties so the actual secret value lives only in the environment at runtime. Combine with @ConfigurationProperties and @Validated so a missing environment variable causes an immediate startup failure with a clear error message rather than a null injection. For production, use Kubernetes Secrets, AWS Secrets Manager, HashiCorp Vault, or equivalent — never store actual credentials in any file that could reach version control.
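A minimal sketch of the placeholder pattern — the property names here are illustrative, not prescribed:

```properties
# application.properties — no secret values live here, only references.
# DB_PASSWORD and PAYMENT_API_KEY must exist in the runtime environment;
# combined with a @Validated @ConfigurationProperties class, a missing
# variable fails startup instead of injecting null.
spring.datasource.password=${DB_PASSWORD}
forge.payment.api-key=${PAYMENT_API_KEY}
```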

    Not reading the Conditions Evaluation Report when auto-configuration behaves unexpectedly
    Symptom

    Hours spent adding beans, removing beans, adding dependencies, and restarting the application trying to force a specific auto-configuration to activate or deactivate. The actual cause — a missing classpath dependency, an existing conflicting bean, or an unset property — is recorded explicitly in the Conditions Evaluation Report that nobody has looked at.

    Fix

    Run the application with --debug or set logging.level.org.springframework.boot.autoconfigure=DEBUG as the first debugging step, not the last. The Conditions Evaluation Report lists every auto-configuration class evaluated at startup with its matched or not-matched status and the exact reason in plain English. The answer is always there — the only question is whether you read the report before or after two hours of guessing.
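Either switch produces the report at startup:

```properties
# Option 1: pass --debug on the command line (java -jar app.jar --debug)
# Option 2: the equivalent property in application.properties:
debug=true
# Or scope it to auto-configuration logging only:
logging.level.org.springframework.boot.autoconfigure=DEBUG
```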

    Using @Value for groups of related properties instead of @ConfigurationProperties
    Symptom

    Property names are string literals scattered across dozens of classes — renaming a property in application.properties breaks silently at runtime with no compile-time error. Required properties have no validation — a missing or mistyped property injects null or an empty string, which causes a NullPointerException or a malformed HTTP request deep in the call stack rather than a clear startup error.

    Fix

    Create a @ConfigurationProperties class for each logical group of related properties, annotated with @Validated and JSR-303 constraints on required fields. The application fails immediately at startup if required properties are missing or malformed, with an error message that names the specific property. Add spring-boot-configuration-processor to the compile classpath for IDE autocomplete — developers see all available properties as they type in application.properties.
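A minimal sketch, assuming a hypothetical forge.payment property group and the Spring Boot 3.x / jakarta.validation namespace:

```java
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotBlank;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.validation.annotation.Validated;

// Binds forge.payment.* from application.properties. With @Validated, a missing
// or blank forge.payment.api-key fails startup with an error naming the property.
@Validated
@ConfigurationProperties(prefix = "forge.payment")
public class PaymentProperties {

    @NotBlank
    private String apiKey;            // forge.payment.api-key

    @Min(1)
    private int timeoutSeconds = 30;  // forge.payment.timeout-seconds (default 30)

    public String getApiKey() { return apiKey; }
    public void setApiKey(String apiKey) { this.apiKey = apiKey; }
    public int getTimeoutSeconds() { return timeoutSeconds; }
    public void setTimeoutSeconds(int timeoutSeconds) { this.timeoutSeconds = timeoutSeconds; }
}
```

Register it with @ConfigurationPropertiesScan on the main class or @EnableConfigurationProperties(PaymentProperties.class) on a configuration class.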

    Putting initialization logic that depends on injected beans or AOP in the constructor
    Symptom

    Database calls in a constructor fail with TransactionRequiredException because the @Transactional proxy is not yet applied. Cache pre-warming fails intermittently depending on bean initialization order. Behavior differs between Spring Boot versions because initialization ordering is not guaranteed to be stable across versions.

    Fix

    Move initialization logic to a @PostConstruct method. By the time @PostConstruct runs, all dependencies are injected, all AOP proxies are applied, all @Value bindings are resolved, and all BeanPostProcessors have run. The bean is fully assembled and all Spring infrastructure is in place. @PostConstruct is the earliest safe point for initialization that uses Spring-managed infrastructure.

Interview Questions on This Topic

  • Q: Explain the internal mechanics of @SpringBootApplication. What are the roles of @SpringBootConfiguration, @EnableAutoConfiguration, and @ComponentScan? (Mid-level)
    @SpringBootApplication is a composed meta-annotation that combines three annotations into one for convenience: (1) @SpringBootConfiguration — a specialization of @Configuration that marks the class as a source of @Bean definitions. It is semantically equivalent to @Configuration but signals specifically that this is the primary Spring Boot application configuration class. (2) @EnableAutoConfiguration — this is the mechanism behind auto-configuration. It imports AutoConfigurationImportSelector, which reads META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports (Spring Boot 3.x) or META-INF/spring.factories (Spring Boot 2.x) from every JAR on the classpath. For each listed configuration class, it evaluates @ConditionalOnClass, @ConditionalOnMissingBean, @ConditionalOnProperty, and other @Conditional guards. Only classes where all conditions pass are instantiated and their @Bean methods executed. (3) @ComponentScan — tells Spring to scan the package of the annotated class and all sub-packages for @Component, @Service, @Repository, @Controller, and other stereotype annotations, registering discovered classes as beans. This is why the main application class should always be in the root package — placing it in a sub-package means @ComponentScan misses sibling packages. The practical implication: @SpringBootApplication is equivalent to writing all three annotations on the main class, but it enforces a specific pattern for where the main class lives relative to the rest of the application code.
  • Q: How does the Spring Boot Actuator help in monitoring production applications? Describe specific endpoints and how you have used them. (Mid-level)
    Spring Boot Actuator exposes production-ready HTTP endpoints that provide runtime visibility into the application without requiring a deployment or a restart. The endpoints I rely on most: /actuator/health returns the application's health status — Kubernetes uses /actuator/health/readiness for readiness probes (is the app ready to receive traffic?) and /actuator/health/liveness for liveness probes (is the process healthy or should it be restarted?). /actuator/metrics exposes JVM metrics (jvm.memory.used, jvm.gc.pause, jvm.threads.live) and infrastructure metrics (hikaricp.connections.active, cache.gets). Querying these directly — curl http://localhost:8080/actuator/metrics/hikaricp.connections.active — is the first step in diagnosing connection pool exhaustion. /actuator/env shows all configuration properties, their resolved values, and which property source provided each value — essential for debugging environment variable override issues where a Kubernetes Secret overrides application.properties. /actuator/loggers allows changing log levels at runtime without restarting — useful for activating DEBUG logging on a specific package during a production investigation and then rolling it back without a deploy. Actuator integrates with Prometheus via the Micrometer facade — exposing /actuator/prometheus in the Prometheus scrape format enables time-series monitoring and alerting in Grafana.
  • Q: Describe the Spring Bean Lifecycle. When exactly is @PostConstruct called, and why would you use it instead of a constructor? (Senior)
    The Spring Bean Lifecycle follows this sequence: (1) Spring instantiates the bean via its constructor — at this point, only constructor arguments are available. Field-level @Autowired dependencies are not yet injected. AOP proxies are not yet applied. @Value bindings are not yet resolved. (2) Spring injects any remaining field or setter dependencies. (3) Spring calls BeanPostProcessor before-initialization hooks — this is where AOP proxies (@Transactional, @Cacheable, @Async wrappers) are applied. (4) Spring calls @PostConstruct methods — at this point all dependencies are injected, all proxies are applied, all properties are bound. The bean is fully assembled. (5) The bean enters active service. (6) Spring calls @PreDestroy methods before destroying the bean — cleanup logic goes here. The critical difference: calling a @Transactional repository method from a constructor fails because the transactional proxy has not been applied yet. The same call in @PostConstruct succeeds because the proxy is in place. This is the most common real-world reason to use @PostConstruct — database queries, cache pre-warming, connection validation, or any initialization that relies on Spring's AOP infrastructure must be in @PostConstruct, not in the constructor.
  • Q: What is the Lombok library used for in Spring Boot applications, and when would you prefer @Builder over @Data? (Mid-level)
    Lombok generates boilerplate Java code at compile time via annotation processing — getters, setters, equals, hashCode, toString, and constructors — so you do not have to write and maintain them manually. @Data is a composite annotation that generates all of these. The problem with @Data on JPA entities is specific and documented: @EqualsAndHashCode generates equals/hashCode based on all fields by default, which causes infinite recursion when two entities have a bidirectional relationship and you call equals on either side. @ToString has the same issue with eager-loaded bidirectional relationships. For JPA entities, the safer pattern is @Getter, @Setter (only on mutable fields), and @EqualsAndHashCode(of = "id") — using only the primary key for equality. @Builder is preferred over @Data when the object is complex and should be constructed once and not modified — it generates a fluent builder API. For DTOs and value objects, @Builder combined with @Getter (and no @Setter) produces an effectively immutable object with a clean construction API. The answer I look for from candidates: @Data is a convenience shortcut that is safe for simple DTOs but requires careful review for JPA entities, request/response objects with bidirectional relationships, and any class where full mutability is not the intent.
  • Q: Design a custom @Conditional annotation. How would you ensure a bean is only loaded when a specific environment variable is set? (Senior)
    Creating a custom @Conditional annotation requires two components. First, define the annotation itself: @Target({ElementType.TYPE, ElementType.METHOD}), @Retention(RetentionPolicy.RUNTIME), @Documented, and @Conditional(OnForgeEnvCondition.class). The @Conditional meta-annotation links the custom annotation to its implementation. Add an optional attribute like String name() to allow specifying which environment variable to check. Second, implement the Condition interface: public class OnForgeEnvCondition implements Condition { @Override public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) { String varName = (String) metadata.getAnnotationAttributes(OnForgeEnv.class.getName()).get("name"); return context.getEnvironment().getProperty(varName) != null; } }. Usage: @OnForgeEnv(name = "FORGE_FEATURE_PAYMENTS_ENABLED") on a @Bean method or @Configuration class. Spring evaluates the condition during bean registration — the bean is only registered if FORGE_FEATURE_PAYMENTS_ENABLED exists in the environment. This is exactly the pattern Spring Boot uses internally for all its @ConditionalOn annotations. The key points for an interview answer: use ConditionContext.getEnvironment() rather than System.getenv() directly — Environment resolves the full property source chain including system properties, environment variables, and application.properties in the correct priority order. Return true to register the bean, false to skip it. Conditions are evaluated before any beans are created, so you cannot check for existing beans inside a Condition without understanding the evaluation order implications.
  • Q: What is the difference between @ControllerAdvice and @RestControllerAdvice, and how does Spring determine which @ExceptionHandler to invoke? (Mid-level)
    @ControllerAdvice is a specialization of @Component that marks a class as a global exception handler. Methods in the class can handle exceptions, bind data to models, or apply transformations to all controllers. By default, handler methods return view names — the return value is treated as a view name for resolution by the ViewResolver, which is appropriate for server-rendered applications using Thymeleaf or JSP. @RestControllerAdvice is a composed annotation combining @ControllerAdvice with @ResponseBody. Every handler method automatically serializes its return value to JSON (or the negotiated content type) and writes it directly to the HTTP response body. For REST APIs — which describes the vast majority of Spring Boot applications today — @RestControllerAdvice is the correct choice. Selection of the specific @ExceptionHandler is based on the exception class hierarchy. When ForgeResourceNotFoundException (which extends ForgeException which extends RuntimeException) is thrown, Spring evaluates all @ExceptionHandler methods in the advice class and selects the most specific match — the handler whose declared exception type is closest to the actual thrown type in the inheritance hierarchy. @ExceptionHandler(ForgeResourceNotFoundException.class) is selected over @ExceptionHandler(RuntimeException.class) over @ExceptionHandler(Exception.class). This is why a catch-all @ExceptionHandler(Exception.class) works as a safety net — it is always the least specific match and only runs when no more specific handler exists.
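The "most specific match" rule from the last answer can be illustrated with a small sketch — this is not Spring's actual implementation, only the hierarchy-distance idea behind it:

```java
import java.util.List;

// Among candidate @ExceptionHandler exception types, pick the one closest to the
// thrown exception's class in the inheritance hierarchy (smallest superclass walk).
class HandlerSelector {
    static Class<?> select(Class<?> thrown, List<Class<?>> handlerTypes) {
        Class<?> best = null;
        int bestDepth = Integer.MAX_VALUE;
        for (Class<?> handled : handlerTypes) {
            if (!handled.isAssignableFrom(thrown)) continue; // this handler cannot catch the type
            int depth = 0;
            for (Class<?> c = thrown; c != null && !c.equals(handled); c = c.getSuperclass()) {
                depth++; // count steps up the hierarchy from thrown type to handled type
            }
            if (depth < bestDepth) {
                bestDepth = depth;
                best = handled;
            }
        }
        return best; // null means no handler applies
    }
}
```

For IllegalStateException with handlers for Exception and RuntimeException, the RuntimeException handler wins; an exact-type handler always beats both.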
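The endpoints described in the Actuator answer above are not all exposed over HTTP by default. A typical exposure configuration (Spring Boot 3.x property names):

```properties
# Expose only the endpoints you actually use; env and loggers can leak
# sensitive detail, so also restrict them at the security layer.
management.endpoints.web.exposure.include=health,metrics,env,loggers,prometheus
# Enable the Kubernetes-style liveness/readiness health groups.
management.endpoint.health.probes.enabled=true
# Note: /actuator/prometheus additionally requires the
# micrometer-registry-prometheus dependency on the classpath.
```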

Frequently Asked Questions

How do you implement Pagination and Sorting in Spring Boot?

Spring Data JPA handles pagination through the Pageable abstraction. Repository methods that accept a Pageable parameter and return Page<T> get automatic SQL LIMIT and OFFSET generation. Create a Pageable with PageRequest.of(pageNumber, pageSize, Sort.by("fieldName").descending()) and pass it to the repository. The returned Page<T> object contains the current page of results plus metadata: getTotalElements(), getTotalPages(), hasNext(), and hasPrevious(). Expose page and size as query parameters on your controller endpoint: public Page<ProductDto> list(@RequestParam int page, @RequestParam int size). Be mindful of deep pagination — OFFSET-based queries get progressively slower at high page numbers because the database must scan and discard all preceding rows. For high-performance pagination on large datasets, consider keyset pagination (WHERE id > :lastSeenId ORDER BY id LIMIT :size), whose cost does not grow with page depth because the database performs an index seek instead of scanning and discarding rows.
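The keyset idea can be sketched in plain Java over an in-memory sorted id list — the stream pipeline stands in for `WHERE id > :lastSeenId ORDER BY id LIMIT :size` (names are illustrative, not a Spring Data API):

```java
import java.util.List;
import java.util.stream.Collectors;

class KeysetPage {
    // Returns the next page of ids strictly after lastSeenId, at most `size` rows.
    // In SQL this is an index seek, so cost does not depend on how deep the page is.
    static List<Long> nextPage(List<Long> sortedIds, long lastSeenId, int size) {
        return sortedIds.stream()
                .filter(id -> id > lastSeenId)  // WHERE id > :lastSeenId
                .limit(size)                    // LIMIT :size
                .collect(Collectors.toList());
    }
}
```

The caller passes the last id of the previous page as the cursor for the next request, instead of a page number.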

What is the difference between @RestController and @Controller?

@RestController is a composed annotation combining @Controller and @ResponseBody. Every method in a @RestController automatically serializes its return value to the HTTP response body — typically JSON via Jackson. @Controller alone is for server-rendered applications: methods return view names (strings like 'products/list') that the ViewResolver resolves to Thymeleaf templates or JSP files. Adding @ResponseBody to an individual method inside a @Controller makes that one method behave as it would in a @RestController — mixing both patterns is unusual but valid for applications that serve both HTML and REST endpoints. For any application building a REST API, @RestController is the correct choice. The @ResponseBody meta-annotation is what makes the serialization happen — @RestController simply makes it the default for every method in the class.

What are Spring Boot Starters and why are they useful?

Starters are curated dependency groups packaged as single Maven or Gradle dependencies. spring-boot-starter-web, for example, pulls in Spring MVC, embedded Tomcat, Jackson for JSON, and all their compatible transitive dependencies in tested, compatible versions. Without starters, you would manually select versions of Spring MVC, Tomcat, Jackson, and their shared transitive dependencies — and version incompatibilities between them are a real problem that starters eliminate entirely. The value is twofold: convenience (one dependency instead of eight) and compatibility (starter versions are tested together by the Spring Boot team). The cost is that each starter also brings in auto-configuration classes that are evaluated at every startup — adding starters for technologies you are not using adds evaluation overhead and can activate unexpected auto-configurations. Audit your pom.xml starters against what you actually use and remove unused ones.
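For reference, a starter is declared like any other dependency, with no version — the Spring Boot parent POM or BOM supplies it:

```xml
<!-- pom.xml: one coordinate pulls in Spring MVC, embedded Tomcat, Jackson,
     and compatible transitive versions managed by the Spring Boot BOM. -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```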

What is the role of application.properties versus application.yml?

Both files serve the same purpose — externalized configuration for Spring Boot applications — and Spring Boot supports both out of the box, loading from the classpath root. application.properties uses flat key-value pairs: spring.datasource.url=jdbc:postgresql://localhost:5432/mydb. application.yml uses YAML's hierarchical indented structure, which makes deeply nested configuration more readable. The same property in YAML: spring: datasource: url: jdbc:postgresql://localhost:5432/mydb. For simple applications with few configuration values, the format is a matter of preference. For microservices with complex nested configuration — multiple data sources, detailed security settings, feature flag groups — YAML's hierarchy reduces repetition and improves readability. One practical difference: YAML does not allow tab indentation, only spaces. A tab character in a YAML file produces a parsing error at startup — a common gotcha for developers switching from properties files.
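Side by side, the same datasource property in both formats:

```properties
# application.properties — flat key
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
```

```yaml
# application.yml — hierarchical, spaces only (a tab here is a parsing error)
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/mydb
```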

How do you debug why a specific auto-configuration class did not activate?

The Conditions Evaluation Report is the answer — always. Run the application with --debug as a command-line argument or set logging.level.org.springframework.boot.autoconfigure=DEBUG in application.properties. At startup, Spring Boot prints every auto-configuration class it evaluated, categorized as Positive matches (conditions passed, beans created), Negative matches (at least one condition failed, class skipped), and Unconditional classes (always applied). For negative matches, the report shows the exact @Conditional annotation that failed and what value it evaluated. Common reasons: @ConditionalOnClass failed because the required library JAR is not on the classpath — check your pom.xml. @ConditionalOnMissingBean failed because you defined a bean of the same type — your explicit bean won, which is intentional. @ConditionalOnProperty failed because the required property is not set or has a different value than the havingValue attribute expected — check application.properties. This report makes auto-configuration completely transparent. There is no magic — only conditions and their evaluation results.

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged