Background Services in ASP.NET Core — IHostedService, BackgroundService and Worker Services Explained
Every non-trivial web application eventually needs to do work that no HTTP request triggers. Think about it: who sends the 'your order has shipped' email at 2am? Who cleans up expired sessions from your database? Who polls a third-party API every 30 seconds for price updates? If your answer is 'a separate console app' or 'a Windows Service', you're managing two deployment artifacts instead of one — and introducing a whole class of synchronization headaches. ASP.NET Core's hosted service model was built to solve exactly this.
Before ASP.NET Core 2.1, developers stitched together timers, threads, and Application_Start hacks to get background work done inside an ASP.NET process. It was fragile, leaked resources on shutdown, and had zero first-class support from the DI container or the application lifetime. The IHostedService interface and the BackgroundService base class changed the game by making background work a first-class citizen — with proper startup/shutdown coordination, cancellation token support, and full access to the DI container.
By the end of this article you'll be able to implement both timed background jobs and queue-consuming workers in production-quality code. You'll understand the difference between IHostedService and BackgroundService, why scoped services inside a singleton hosted service will silently give you stale data or worse, how to handle exceptions without silently killing your background loop, and exactly how the .NET Generic Host coordinates shutdown across all hosted services. Let's build it layer by layer.
IHostedService — The Contract That Everything Builds On
IHostedService is a two-method interface defined in Microsoft.Extensions.Hosting. That's it — StartAsync(CancellationToken) and StopAsync(CancellationToken). The Generic Host calls StartAsync on every registered IHostedService in registration order during startup, and StopAsync in reverse order during shutdown. This order guarantee is load-bearing — if Service B depends on Service A being ready, register A first.
StartAsync is called before the HTTP server starts accepting requests in a web application. This is intentional: if your background service needs to warm a cache before traffic hits, you can do it here and the host will wait. But watch out — if StartAsync blocks indefinitely, your app never starts. Long-running work should be kicked off onto a Task and returned from immediately, not awaited inline.
StopAsync receives a cancellation token with a configurable timeout (default 5 seconds, controlled by HostOptions.ShutdownTimeout). When SIGTERM arrives — whether from Kubernetes, docker stop, systemd, or Ctrl+C — the host signals this token. Your service has until the timeout to finish gracefully. After that, the process is terminated regardless. This is why your background loops must observe cancellation tokens religiously, not just at the top level.
```csharp
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using System;
using System.Threading;
using System.Threading.Tasks;

/// <summary>
/// Warms a local in-memory cache before the HTTP server accepts any traffic.
/// Because StartAsync is awaited by the host before Kestrel starts, requests
/// will never see a cold cache state.
/// </summary>
public sealed class CacheWarmingService : IHostedService
{
    private readonly IProductCacheService _productCache;
    private readonly ILogger<CacheWarmingService> _logger;

    public CacheWarmingService(
        IProductCacheService productCache,
        ILogger<CacheWarmingService> logger)
    {
        _productCache = productCache;
        _logger = logger;
    }

    // Called by the host before the HTTP server starts.
    // We AWAIT the cache warm here intentionally — we want it complete before traffic arrives.
    public async Task StartAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("[CacheWarmingService] Warming product cache before accepting traffic...");

        // Pass cancellationToken down so we can abort if the host is shutting down
        // before we even finish starting (e.g., rapid Ctrl+C during startup).
        await _productCache.WarmAsync(cancellationToken);

        _logger.LogInformation("[CacheWarmingService] Cache warm complete. Ready for traffic.");
    }

    // Called by the host when shutdown is signalled.
    // Nothing to clean up here — the cache service handles its own disposal.
    public Task StopAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("[CacheWarmingService] Stopping — no cleanup required.");
        return Task.CompletedTask;
    }
}

// --- Registration in Program.cs ---
// builder.Services.AddHostedService<CacheWarmingService>();
// builder.Services.AddSingleton<IProductCacheService, ProductCacheService>();
```
```
info: CacheWarmingService[0]
      [CacheWarmingService] Warming product cache before accepting traffic...
info: CacheWarmingService[0]
      [CacheWarmingService] Cache warm complete. Ready for traffic.
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: https://localhost:5001
```
BackgroundService — The Right Way to Write Long-Running Workers
BackgroundService is an abstract base class that implements IHostedService for you. It introduces a single abstract method: ExecuteAsync(CancellationToken stoppingToken). The base class's StartAsync implementation starts ExecuteAsync as a background Task and returns as soon as it yields — solving the 'don't block StartAsync' problem without you having to think about it. One caveat: ExecuteAsync is invoked directly rather than via Task.Run, so heavy synchronous work before your first await will still block host startup.
The stoppingToken passed into ExecuteAsync is cancelled when the host begins its shutdown sequence. Your job is to observe that token inside your loop. The idiomatic pattern is a while (!stoppingToken.IsCancellationRequested) loop, or passing the token to every awaitable operation you call. If ExecuteAsync throws an unhandled exception, the behaviour depends on the runtime: before .NET 6 the exception was swallowed — your background job silently died while your web app happily continued serving requests. Since .NET 6 the default (HostOptions.BackgroundServiceExceptionBehavior.StopHost) logs the exception and stops the entire host, which at least makes the failure loud; setting BackgroundServiceExceptionBehavior.Ignore restores the old silent behaviour. We'll cover how to handle this in the gotchas section.
One subtlety that catches people out: StopAsync in BackgroundService cancels stoppingToken and then awaits the ExecuteAsync task. If your loop doesn't observe the cancellation token, StopAsync will block until HostOptions.ShutdownTimeout expires and then the process is forcibly killed — your 'graceful shutdown' isn't graceful at all.
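Since .NET 6 the crash policy is configurable via HostOptions.BackgroundServiceExceptionBehavior. A minimal Program.cs sketch making the default explicit:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<HostOptions>(options =>
{
    // StopHost is the .NET 6+ default: log the unhandled exception, then stop the host.
    // Ignore restores the pre-.NET 6 behaviour: the worker dies silently while the
    // host keeps running — almost never what you want.
    options.BackgroundServiceExceptionBehavior = BackgroundServiceExceptionBehavior.StopHost;
});

var app = builder.Build();
app.Run();
```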
```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using System;
using System.Threading;
using System.Threading.Tasks;

/// <summary>
/// Polls an order queue every 5 seconds and processes any pending orders.
/// Demonstrates the correct BackgroundService pattern including:
/// - Scoped service resolution inside a singleton worker
/// - Cancellation token propagation
/// - Exception handling that keeps the loop alive
/// </summary>
public sealed class OrderProcessingWorker : BackgroundService
{
    // We inject IServiceScopeFactory — NOT IOrderRepository directly.
    // BackgroundService is registered as a singleton, but IOrderRepository
    // is likely scoped. Injecting a scoped service into a singleton causes
    // the 'captured dependency' bug. IServiceScopeFactory is always safe.
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly ILogger<OrderProcessingWorker> _logger;
    private static readonly TimeSpan PollingInterval = TimeSpan.FromSeconds(5);

    public OrderProcessingWorker(
        IServiceScopeFactory scopeFactory,
        ILogger<OrderProcessingWorker> logger)
    {
        _scopeFactory = scopeFactory;
        _logger = logger;
    }

    // ExecuteAsync is called once by BackgroundService.StartAsync on a background Task.
    // It runs until stoppingToken is cancelled (host shutdown) or an exception escapes.
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogInformation("[OrderProcessingWorker] Worker started.");

        // Loop runs until the host signals shutdown via stoppingToken.
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await ProcessPendingOrdersAsync(stoppingToken);
            }
            catch (OperationCanceledException)
            {
                // This is normal — stoppingToken was cancelled during an await.
                // Break the loop cleanly rather than logging a spurious error.
                _logger.LogInformation("[OrderProcessingWorker] Shutdown requested during processing.");
                break;
            }
            catch (Exception ex)
            {
                // Log the error but DON'T rethrow — rethrowing kills the hosted service.
                // Instead, we pause briefly and retry on the next iteration.
                // In production you'd also want alerting here (Sentry, Application Insights, etc.).
                _logger.LogError(ex,
                    "[OrderProcessingWorker] Unhandled exception in processing loop. Retrying in {Interval}s.",
                    PollingInterval.TotalSeconds);
            }

            // Task.Delay observes the cancellation token — if shutdown happens during
            // the delay, it throws OperationCanceledException immediately rather
            // than waiting out the full interval. This is what makes shutdown fast.
            try
            {
                await Task.Delay(PollingInterval, stoppingToken);
            }
            catch (OperationCanceledException)
            {
                break; // Shutdown signalled during the delay — exit without a spurious error.
            }
        }

        _logger.LogInformation("[OrderProcessingWorker] Worker stopped cleanly.");
    }

    private async Task ProcessPendingOrdersAsync(CancellationToken cancellationToken)
    {
        // Create a fresh DI scope per iteration — this gives us a fresh DbContext,
        // fresh unit-of-work, etc. Scope is disposed at end of using block.
        await using var scope = _scopeFactory.CreateAsyncScope();
        var orderRepository = scope.ServiceProvider.GetRequiredService<IOrderRepository>();
        var orderNotifier = scope.ServiceProvider.GetRequiredService<IOrderNotifier>();

        var pendingOrders = await orderRepository.GetPendingOrdersAsync(cancellationToken);
        if (pendingOrders.Count == 0)
        {
            _logger.LogDebug("[OrderProcessingWorker] No pending orders found.");
            return;
        }

        _logger.LogInformation("[OrderProcessingWorker] Processing {Count} pending orders.", pendingOrders.Count);

        foreach (var order in pendingOrders)
        {
            // Pass cancellationToken to every async call so we can abort mid-batch on shutdown.
            await orderRepository.MarkAsProcessingAsync(order.Id, cancellationToken);
            await orderNotifier.SendConfirmationAsync(order, cancellationToken);
            await orderRepository.MarkAsCompleteAsync(order.Id, cancellationToken);

            _logger.LogInformation("[OrderProcessingWorker] Order {OrderId} processed successfully.", order.Id);
        }
    }
}

// --- Registration in Program.cs ---
// builder.Services.AddHostedService<OrderProcessingWorker>();
```
```
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Worker started.
dbug: OrderProcessingWorker[0]
      [OrderProcessingWorker] No pending orders found.
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Processing 3 pending orders.
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Order a1b2c3 processed successfully.
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Order d4e5f6 processed successfully.
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Order g7h8i9 processed successfully.
# ... on Ctrl+C ...
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Shutdown requested during processing.
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Worker stopped cleanly.
```
Production Patterns — Channels, Scoped Services, and Crash-Safe Workers
Polling on a timer works, but it's inefficient when you have variable load. The production pattern for background processing is a producer-consumer queue using System.Threading.Channels. Your HTTP controllers or other services write work items into the channel (non-blocking in the common case), and your BackgroundService consumes from the channel as fast as it can. This gives you genuine push-based processing: cap the channel's capacity, and producers are forced to slow down (backpressure) whenever the consumer falls behind.
The second production concern is crash resilience. As mentioned, an unhandled exception in ExecuteAsync brings down the whole host in .NET 6+ — and silently killed just the worker on earlier runtimes, or when BackgroundServiceExceptionBehavior.Ignore is set. The fix that most teams reach for is wrapping the entire loop body in a try/catch — which we did above. But a catch-all has its own failure mode: if the error is unrecoverable, you're now retrying a permanently broken worker forever. The more nuclear option is calling IHostApplicationLifetime.StopApplication() inside your catch block to bring the whole process down intentionally. (TaskScheduler.UnobservedTaskException is an event you can subscribe to for surfacing failures from fire-and-forget tasks, but it won't help with ExecuteAsync itself.) In Kubernetes, a crashed pod restarts — a silently dead worker doesn't. Sometimes a hard crash is safer than silent failure.
For production observability, track three metrics on every background worker: iterations completed, exceptions per iteration, and processing lag (time from item enqueue to item processed). These three numbers tell you everything about the health of your worker at a glance.
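A sketch of tracking those three metrics with System.Diagnostics.Metrics (built into .NET 6+); the meter and instrument names here are illustrative, not a convention:

```csharp
using System;
using System.Diagnostics.Metrics;

// Wraps the three worker health metrics: iterations, exceptions, and processing lag.
// Export them with OpenTelemetry, dotnet-counters, or any MeterListener.
public sealed class WorkerMetrics : IDisposable
{
    private readonly Meter _meter = new("OrderWorker");
    private readonly Counter<long> _iterations;
    private readonly Counter<long> _exceptions;
    private readonly Histogram<double> _lagSeconds;

    public WorkerMetrics()
    {
        _iterations = _meter.CreateCounter<long>("worker.iterations");
        _exceptions = _meter.CreateCounter<long>("worker.exceptions");
        _lagSeconds = _meter.CreateHistogram<double>("worker.lag_seconds");
    }

    // Call once at the end of each loop iteration.
    public void IterationCompleted() => _iterations.Add(1);

    // Call from the catch block in the loop body.
    public void ExceptionOccurred() => _exceptions.Add(1);

    // Call when an item is dequeued: records enqueue-to-processing lag in seconds.
    public void RecordLag(DateTimeOffset enqueuedAt) =>
        _lagSeconds.Record((DateTimeOffset.UtcNow - enqueuedAt).TotalSeconds);

    public void Dispose() => _meter.Dispose();
}
```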
```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

/// <summary>
/// Represents a single email dispatch request written by HTTP handlers
/// and consumed by the background worker.
/// </summary>
public sealed record EmailDispatchRequest(
    string RecipientAddress,
    string Subject,
    string HtmlBody,
    DateTimeOffset EnqueuedAt);

/// <summary>
/// Singleton channel that acts as the in-process message bus between
/// HTTP request handlers (producers) and the email worker (consumer).
/// Registered as a singleton so both producers and the worker share the same instance.
/// </summary>
public sealed class EmailDispatchChannel
{
    // BoundedCapacity of 500 means the channel holds at most 500 pending emails.
    // If the worker falls behind, WriteAsync on producers will apply backpressure
    // (await until space is available) rather than silently dropping messages.
    private readonly Channel<EmailDispatchRequest> _channel =
        Channel.CreateBounded<EmailDispatchRequest>(new BoundedChannelOptions(capacity: 500)
        {
            FullMode = BoundedChannelFullMode.Wait, // Block producers rather than drop
            SingleReader = true,  // Only the worker reads — allows optimizations
            SingleWriter = false  // Many HTTP request threads can write
        });

    public ChannelWriter<EmailDispatchRequest> Writer => _channel.Writer;
    public ChannelReader<EmailDispatchRequest> Reader => _channel.Reader;
}

/// <summary>
/// Background worker that consumes email dispatch requests from the channel.
/// Uses ChannelReader.ReadAllAsync, the cleanest cancellation-aware consume
/// pattern — iteration ends when the channel is completed, and throws
/// OperationCanceledException when the cancellation token fires.
/// </summary>
public sealed class EmailDispatchWorker : BackgroundService
{
    private readonly EmailDispatchChannel _emailChannel;
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly IHostApplicationLifetime _appLifetime;
    private readonly ILogger<EmailDispatchWorker> _logger;

    public EmailDispatchWorker(
        EmailDispatchChannel emailChannel,
        IServiceScopeFactory scopeFactory,
        IHostApplicationLifetime appLifetime,
        ILogger<EmailDispatchWorker> logger)
    {
        _emailChannel = emailChannel;
        _scopeFactory = scopeFactory;
        _appLifetime = appLifetime;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogInformation("[EmailDispatchWorker] Starting — listening for dispatch requests.");

        try
        {
            // ReadAllAsync yields each item as it arrives, blocking asynchronously
            // when the channel is empty. When stoppingToken fires, the enumeration
            // throws OperationCanceledException, which we catch below to exit cleanly.
            await foreach (var request in _emailChannel.Reader.ReadAllAsync(stoppingToken))
            {
                var lag = DateTimeOffset.UtcNow - request.EnqueuedAt;

                // Alert if emails are sitting in the queue for more than 30 seconds.
                if (lag > TimeSpan.FromSeconds(30))
                {
                    _logger.LogWarning(
                        "[EmailDispatchWorker] High lag detected: {Lag:F1}s for email to {Recipient}.",
                        lag.TotalSeconds, request.RecipientAddress);
                }

                try
                {
                    await DispatchEmailAsync(request, stoppingToken);
                    _logger.LogInformation(
                        "[EmailDispatchWorker] Email dispatched to {Recipient} (lag: {Lag:F1}s).",
                        request.RecipientAddress, lag.TotalSeconds);
                }
                catch (OperationCanceledException)
                {
                    // Shutdown during dispatch — requeue or accept the loss depending on your SLA.
                    _logger.LogWarning(
                        "[EmailDispatchWorker] Cancelled mid-dispatch for {Recipient}. Item may be lost.",
                        request.RecipientAddress);
                    break;
                }
                catch (Exception ex)
                {
                    _logger.LogError(ex,
                        "[EmailDispatchWorker] Failed to dispatch email to {Recipient}. Continuing with next item.",
                        request.RecipientAddress);
                    // For truly critical workers: call _appLifetime.StopApplication() here
                    // to crash the pod intentionally so Kubernetes restarts it.
                    // Safer than silently skipping and accumulating failures.
                }
            }
        }
        catch (OperationCanceledException)
        {
            // Host shutdown while waiting for the next item — expected, not an error.
        }

        _logger.LogInformation("[EmailDispatchWorker] Stopped.");
    }

    private async Task DispatchEmailAsync(EmailDispatchRequest request, CancellationToken cancellationToken)
    {
        await using var scope = _scopeFactory.CreateAsyncScope();
        var emailSender = scope.ServiceProvider.GetRequiredService<IEmailSender>();
        await emailSender.SendAsync(request.RecipientAddress, request.Subject, request.HtmlBody, cancellationToken);
    }
}

// --- Registration in Program.cs ---
// builder.Services.AddSingleton<EmailDispatchChannel>();
// builder.Services.AddHostedService<EmailDispatchWorker>();
//
// --- Usage in a Controller or Minimal API endpoint ---
// await emailChannel.Writer.WriteAsync(new EmailDispatchRequest(
//     RecipientAddress: "user@example.com",
//     Subject: "Your order is confirmed",
//     HtmlBody: "<h1>Thanks for your order!</h1>",
//     EnqueuedAt: DateTimeOffset.UtcNow), cancellationToken);
```
```
info: EmailDispatchWorker[0]
      [EmailDispatchWorker] Starting — listening for dispatch requests.
info: EmailDispatchWorker[0]
      [EmailDispatchWorker] Email dispatched to alice@example.com (lag: 0.1s).
info: EmailDispatchWorker[0]
      [EmailDispatchWorker] Email dispatched to bob@example.com (lag: 0.2s).
warn: EmailDispatchWorker[0]
      [EmailDispatchWorker] High lag detected: 34.7s for email to charlie@example.com.
info: EmailDispatchWorker[0]
      [EmailDispatchWorker] Email dispatched to charlie@example.com (lag: 34.7s).
# ... on graceful shutdown ...
info: EmailDispatchWorker[0]
      [EmailDispatchWorker] Stopped.
```
The Generic Host Shutdown Sequence — What Actually Happens at Ctrl+C
Understanding shutdown is what separates production-grade background services from ones that corrupt data on every deployment. When the host receives a termination signal (SIGTERM on Linux, Ctrl+C, or IHostApplicationLifetime.StopApplication()), here's the exact sequence:
First, IHostApplicationLifetime.ApplicationStopping fires — useful for stopping new work from being accepted. Second, IHostedService.StopAsync is called on the hosted services in reverse registration order; by default each call is awaited sequentially before the next begins (.NET 8 adds HostOptions.ServicesStopConcurrently if you want them to run in parallel). Third, the host waits up to HostOptions.ShutdownTimeout (default 5 seconds) for all StopAsync calls to complete. Fourth, IHostApplicationLifetime.ApplicationStopped fires and the process exits.
That 5-second default is almost never enough for a real worker that might be mid-batch on a database transaction. In production, increase it: builder.Services.Configure&lt;HostOptions&gt;(o =&gt; o.ShutdownTimeout = TimeSpan.FromSeconds(30)). In Kubernetes, set your pod's terminationGracePeriodSeconds to match. If your .NET shutdown timeout is 30s but Kubernetes kills the pod after 20s, you've still got a problem.
Also be aware that every StopAsync call draws from the same ShutdownTimeout budget: a slow StopAsync in one service eats into the time left for the services stopped after it, and if you opt into concurrent shutdown, services that both touch a shared resource (like a database connection) during shutdown can race. Design your StopAsync implementations to be fast and independent of each other.
```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// --- Configure shutdown timeout to 30 seconds ---
// Default is 5 seconds — almost always too short for real workers.
// Match this value to your Kubernetes terminationGracePeriodSeconds minus a 5s buffer.
builder.Services.Configure<HostOptions>(options =>
{
    options.ShutdownTimeout = TimeSpan.FromSeconds(30);
});

// --- Register infrastructure services ---
builder.Services.AddSingleton<IProductCacheService, ProductCacheService>();
builder.Services.AddScoped<IOrderRepository, OrderRepository>();
builder.Services.AddScoped<IOrderNotifier, OrderNotifier>();
builder.Services.AddScoped<IEmailSender, SmtpEmailSender>();
builder.Services.AddSingleton<EmailDispatchChannel>();

// --- Register hosted services in dependency order ---
// CacheWarmingService runs first (StartAsync blocks until cache is warm)
// before any worker that might query the cache.
builder.Services.AddHostedService<CacheWarmingService>();
builder.Services.AddHostedService<OrderProcessingWorker>();
builder.Services.AddHostedService<EmailDispatchWorker>();

// --- Add a hosted service that monitors the others and escalates failures ---
// Pattern: inject IHostApplicationLifetime to self-heal or escalate crashes.
builder.Services.AddHostedService<WorkerHealthMonitor>();

builder.Services.AddControllers();

var app = builder.Build();

// Register a callback on ApplicationStopping to flush any in-flight telemetry
// before the process exits. This runs before StopAsync on any hosted service.
var lifetime = app.Services.GetRequiredService<IHostApplicationLifetime>();
lifetime.ApplicationStopping.Register(() =>
    app.Logger.LogInformation("[Host] Shutdown initiated — flushing telemetry..."));

app.MapControllers();
app.Run();

// --- WorkerHealthMonitor: a self-healing pattern ---
// Demonstrates using IHostApplicationLifetime to crash intentionally on unrecoverable failure.
public sealed class WorkerHealthMonitor : BackgroundService
{
    private readonly IHostApplicationLifetime _appLifetime;
    private readonly ILogger<WorkerHealthMonitor> _logger;

    public WorkerHealthMonitor(
        IHostApplicationLifetime appLifetime,
        ILogger<WorkerHealthMonitor> logger)
    {
        _appLifetime = appLifetime;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Give the rest of the application time to start before beginning health checks.
        await Task.Delay(TimeSpan.FromSeconds(10), stoppingToken);

        while (!stoppingToken.IsCancellationRequested)
        {
            // In a real implementation, check metrics endpoints, event counters,
            // or a shared health flag set by other workers.
            bool systemHealthy = await CheckSystemHealthAsync(stoppingToken);
            if (!systemHealthy)
            {
                _logger.LogCritical(
                    "[WorkerHealthMonitor] Critical subsystem failure detected. Initiating controlled shutdown.");

                // StopApplication triggers graceful shutdown — all StopAsync methods run,
                // ShutdownTimeout is respected, and the process exits cleanly.
                // In Kubernetes this causes a pod restart — which is what we want.
                _appLifetime.StopApplication();
                return;
            }

            await Task.Delay(TimeSpan.FromSeconds(15), stoppingToken);
        }
    }

    private Task<bool> CheckSystemHealthAsync(CancellationToken cancellationToken)
    {
        // Placeholder — in production, query a health check endpoint or
        // check a shared failure counter from other workers.
        return Task.FromResult(true);
    }
}
```
```
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
# ... on Ctrl+C ...
info: Host[0]
      [Host] Shutdown initiated — flushing telemetry...
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Shutdown requested during processing.
info: OrderProcessingWorker[0]
      [OrderProcessingWorker] Worker stopped cleanly.
info: EmailDispatchWorker[0]
      [EmailDispatchWorker] Stopped.
info: Microsoft.Hosting.Lifetime[0]
      Application is shutting down...
```
| Aspect | IHostedService (direct) | BackgroundService (abstract base) | Worker Service (project template) |
|---|---|---|---|
| What it is | Interface with 2 methods: StartAsync / StopAsync | Abstract class implementing IHostedService; adds ExecuteAsync | A standalone .NET host (no HTTP server) using BackgroundService |
| Best for | One-shot startup/shutdown tasks (cache warm, DB migration) | Long-running loops, timed polling, queue consumers | Dedicated microservices with no HTTP surface (pure worker processes) |
| Cancellation handling | Manual — you manage the token yourself | Automatic — stoppingToken passed into ExecuteAsync | Same as BackgroundService — it IS BackgroundService |
| HTTP server co-hosting | Yes — used inside ASP.NET Core web apps | Yes — used inside ASP.NET Core web apps | No HTTP server by default — add manually if needed |
| Startup blocking | Yes — StartAsync can intentionally block Kestrel startup | No — ExecuteAsync runs on a background Task; Kestrel starts immediately | Not applicable — no Kestrel |
| DI scope access | Inject IServiceScopeFactory for scoped services | Inject IServiceScopeFactory for scoped services | Same — identical DI rules apply |
| Exception on unhandled error | Process behaviour depends on your code | Pre-.NET 6: worker dies silently, host continues; .NET 6+: exception logged and host stopped by default (BackgroundServiceExceptionBehavior.StopHost) | Same as BackgroundService — use IHostApplicationLifetime.StopApplication() from catch blocks to shut down deliberately |
🎯 Key Takeaways
- BackgroundService's StartAsync fires your ExecuteAsync on a background Task and returns immediately — Kestrel starts without waiting for your worker loop to complete. IHostedService.StartAsync blocks Kestrel startup, which is useful for one-shot warming tasks but dangerous for long-running work.
- Always inject IServiceScopeFactory into hosted services, never scoped services directly. Create a new scope per unit of work with CreateAsyncScope() — this gives you a fresh DbContext and avoids the captured-dependency bug that causes stale data and connection leaks.
- Task.Delay(interval, stoppingToken) is non-negotiable in production loops. Omitting the token means your worker sleeps through shutdown signals, causing multi-second deployment delays and potential force-kills from Kubernetes.
- An unhandled exception in ExecuteAsync stops the whole host by default in .NET 6+ (BackgroundServiceExceptionBehavior.StopHost); on earlier runtimes — or with the Ignore setting — the worker died silently while the host kept serving traffic. The silent-failure trap today usually comes from your own catch-all handlers: when a catch block swallows an unrecoverable error, call IHostApplicationLifetime.StopApplication() so the process restarts and alerting fires — a crashed pod that restarts is far safer than a silently dead worker.
⚠ Common Mistakes to Avoid
- ✕Mistake 1: Injecting a scoped service (like DbContext or a repository) directly into a hosted service constructor — Because hosted services are singletons, the scoped service is captured at startup and reused forever, meaning you get the same DbContext instance across all iterations — leading to stale data, connection leaks, and ObjectDisposedException after the first request scope ends. Fix: inject IServiceScopeFactory instead and call CreateAsyncScope() at the start of each iteration, then resolve your scoped services from the scope.
- ✕Mistake 2: Not observing the cancellation token inside the loop body — Symptom: the app hangs for the full ShutdownTimeout on every deployment, then gets force-killed by Kubernetes. Fix: pass stoppingToken to every awaitable call inside your loop — particularly Task.Delay, database queries, HTTP calls, and channel reads. If an operation doesn't accept a CancellationToken, wrap it with a timeout using CancellationTokenSource.CreateLinkedTokenSource.
- ✕Mistake 3: Letting an exception silently kill your worker — this was the default before .NET 6, and it comes back in .NET 6+ if you set BackgroundServiceExceptionBehavior.Ignore or if your own catch block swallows an unrecoverable error. Symptom: your background service stops processing work, no obvious error is visible, but the web app continues to serve 200 OK responses. The only evidence is a single error log line and then silence. Fix: wrap your entire loop body in try/catch, log the error, and either continue looping with a back-off delay or call _appLifetime.StopApplication() to crash intentionally so the process restarts.
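Mistake 2 mentions wrapping token-less operations with CancellationTokenSource.CreateLinkedTokenSource. A minimal sketch of that wrapper (the CancellationHelpers class and RunWithTimeoutAsync name are hypothetical helpers, not framework APIs):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancellationHelpers
{
    // Bounds a token-less legacy operation with a timeout that is ALSO linked
    // to the worker's stoppingToken: the linked token fires when either the
    // timeout elapses or the host begins shutdown.
    public static async Task<T> RunWithTimeoutAsync<T>(
        Func<Task<T>> operation,
        TimeSpan timeout,
        CancellationToken stoppingToken)
    {
        using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(stoppingToken);
        linkedCts.CancelAfter(timeout);

        var operationTask = operation();
        // A sentinel task that completes only when the linked token fires.
        var cancellationTask = Task.Delay(Timeout.Infinite, linkedCts.Token);

        var completed = await Task.WhenAny(operationTask, cancellationTask);
        if (completed == cancellationTask)
        {
            // Note: the underlying operation keeps running — we can't truly cancel
            // a token-less call, only stop waiting for it.
            throw new OperationCanceledException(linkedCts.Token);
        }

        return await operationTask; // Propagates the result or the operation's own exception.
    }
}
```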
Interview Questions on This Topic
- QWhat's the difference between IHostedService and BackgroundService, and when would you choose one over the other?
- QHow do you safely consume a scoped service — like an Entity Framework DbContext — from inside a BackgroundService, and why does it need special handling?
- QYour BackgroundService is processing jobs from a queue. During a Kubernetes rolling deployment, how do you ensure in-flight jobs aren't lost, and what configuration changes are needed on both the .NET and Kubernetes side?
Frequently Asked Questions
Can I run multiple background services in a single ASP.NET Core app?
Yes — call AddHostedService&lt;T&gt;() once per worker. The host starts them in registration order and stops them in reverse order, so register a service before any service that depends on it being ready.
How do I stop a BackgroundService from inside the service itself?
Inject IHostApplicationLifetime and call _appLifetime.StopApplication(). This triggers the host's full graceful shutdown sequence — all hosted services get their StopAsync called, ShutdownTimeout is respected, and the process exits cleanly. This is the correct pattern when an unrecoverable error means you want a pod restart rather than silent failure.
What's the difference between a BackgroundService and a .NET Worker Service project template?
BackgroundService is a class you add to any .NET host — including an ASP.NET Core web app — to run background work alongside HTTP request handling. A Worker Service is a project template that creates a Generic Host without Kestrel, designed for background-only processes with no HTTP surface. Under the hood it uses the same BackgroundService base class — it's a deployment topology choice, not a different API.
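For reference, a condensed sketch of the Program.cs that `dotnet new worker` generates on recent SDKs (.NET 7+ — older templates use Host.CreateDefaultBuilder instead); `Worker` is the template's BackgroundService subclass:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Host.CreateApplicationBuilder builds a Generic Host with no Kestrel:
// same DI, logging, and lifetime plumbing as a web app, minus the HTTP pipeline.
var builder = Host.CreateApplicationBuilder(args);

builder.Services.AddHostedService<Worker>(); // your BackgroundService subclass

builder.Build().Run();
```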
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.