title: "Debugging Guide for Navius Development" description: "Comprehensive techniques and tools for debugging Navius applications" category: "Guides" tags: ["development", "debugging", "troubleshooting", "logging", "performance", "rust"] last_updated: "April 7, 2025" version: "1.0"
# Debugging Guide for Navius Development
This guide provides comprehensive instructions and best practices for debugging Navius applications. Effective debugging is essential for maintaining code quality and resolving issues efficiently.
## Table of Contents
- Debugging Philosophy
- Common Debugging Scenarios
- Debugging Tools
- Logging and Tracing
- Rust-Specific Debugging Techniques
- Database Debugging
- API Debugging
- Performance Debugging
- Advanced Debugging Scenarios
- Debugging in Production
## Debugging Philosophy
Effective debugging in Navius development follows these principles:
- **Reproduce First** - Create a reliable reproduction case before attempting to fix an issue
- **Isolate the Problem** - Narrow down the scope of the issue
- **Data-Driven Approach** - Use facts, logs, and evidence rather than guesswork
- **Systematic Investigation** - Follow a methodical process rather than making random changes
- **Root Cause Analysis** - Fix the underlying cause, not just the symptoms
## Common Debugging Scenarios

### Application Crashes
When your Navius application crashes:
1. **Check the Stack Trace** - Identify where the crash occurred
2. **Examine Error Messages** - Parse logs for error details
3. **Reproduce the Crash** - Create a minimal test case
4. **Check for Resource Issues** - Verify memory usage and system resources
5. **Review Recent Changes** - Consider what code changed recently
Example stack trace analysis:
```text
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: DatabaseError { kind: ConnectionError, cause: Some("connection refused") }', src/services/user_service.rs:52:10
stack backtrace:
   0: std::panicking::begin_panic_handler
   1: std::panicking::panic_handler
   2: core::panicking::panic_fmt
   3: core::result::unwrap_failed
   4: navius::services::user_service::UserService::find_by_id
   5: navius::handlers::user_handlers::get_user
   6: navius::main
```
This indicates:

- The crash is in `user_service.rs` at line 52
- It's unwrapping a database connection error
- The underlying connection is being refused
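For illustration, here is a hypothetical reconstruction of that panic site alongside a safer rewrite; the `Db`, `Conn`, and `User` stand-ins below are assumptions, not the actual Navius types:

```rust
// Hypothetical stand-ins for the real types around user_service.rs:52.
#[derive(Debug)]
pub struct DatabaseError(pub String);

pub struct User {
    pub id: u64,
}

pub struct Db;
pub struct Conn;

impl Db {
    pub fn get_connection(&self) -> Result<Conn, DatabaseError> {
        Err(DatabaseError("connection refused".into()))
    }
}

impl Conn {
    pub fn fetch_user(&self, id: u64) -> Result<User, DatabaseError> {
        Ok(User { id })
    }
}

// Before (panics with the trace above):
//     let conn = db.get_connection().unwrap();
//
// After: propagate the error with `?` so the handler can return an
// error response instead of crashing the process.
pub fn find_by_id(db: &Db, id: u64) -> Result<User, DatabaseError> {
    let conn = db.get_connection()?;
    conn.fetch_user(id)
}
```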
### Runtime Errors

For non-crash errors (incorrect behavior):

1. **Identify the Expected vs. Actual Behavior**
2. **Use Logging to Track Flow**
3. **Create Unit Tests** to reproduce and verify the issue (see the sketch after this list)
4. **Use Debugger Breakpoints** at key decision points
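As a sketch of step 3, a small regression test that pins down expected vs. actual behavior; the `slugify` function here is a hypothetical example, not part of Navius:

```rust
// A minimal function under investigation (hypothetical).
fn slugify(input: &str) -> String {
    input.trim().to_lowercase().replace(' ', "-")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn slugify_trims_and_hyphenates() {
        // Pin down the expected behavior so the eventual fix is verifiable.
        assert_eq!(slugify("  Hello World  "), "hello-world");
    }
}
```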
### Build Errors

For build failures:

1. **Read Compiler Messages Carefully** - Rust provides detailed error messages
2. **Check Dependencies** - Verify `Cargo.toml` and dependency versions
3. **Use Tools** - Clippy can identify additional issues
4. **Clean and Rebuild** - `cargo clean && cargo build`
### Debugging Tests

For test failures:

1. **Run a Single Test** - Focus on one test with `cargo test test_name`
2. **Use `--nocapture`** - See output with `cargo test -- --nocapture`
3. **Add Debugging Prints** - Temporarily add print statements
4. **Use Test-Specific Logs** - Enable debug logging during tests (a sketch follows this list)
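For test-specific logs, one common pattern (an assumption about setup, not a Navius-provided helper) is to initialize `tracing` with a test writer so output cooperates with the test harness:

```rust
#[cfg(test)]
mod tests {
    use tracing_subscriber::EnvFilter;

    fn init_test_logging() {
        // try_init() tolerates a subscriber already installed by another
        // test; with_test_writer() routes output through the test harness,
        // so it appears with `cargo test -- --nocapture` or on failure.
        let _ = tracing_subscriber::fmt()
            .with_env_filter(EnvFilter::new("debug"))
            .with_test_writer()
            .try_init();
    }

    #[test]
    fn demonstrates_test_logging() {
        init_test_logging();
        tracing::debug!("visible when running with --nocapture");
    }
}
```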
## Debugging Tools

### IDE Debuggers

#### Visual Studio Code

1. Set up a configuration in `.vscode/launch.json`:

   ```json
   {
     "version": "0.2.0",
     "configurations": [
       {
         "type": "lldb",
         "request": "launch",
         "name": "Debug Navius Server",
         "cargo": {
           "args": ["build", "--bin=navius"],
           "filter": {
             "name": "navius",
             "kind": "bin"
           }
         },
         "args": [],
         "cwd": "${workspaceFolder}",
         "env": {
           "RUST_LOG": "debug",
           "CONFIG_DIR": "./config",
           "RUN_ENV": "development"
         }
       },
       {
         "type": "lldb",
         "request": "launch",
         "name": "Debug Unit Tests",
         "cargo": {
           "args": ["test", "--no-run"],
           "filter": {
             "name": "navius",
             "kind": "lib"
           }
         },
         "args": [],
         "cwd": "${workspaceFolder}"
       }
     ]
   }
   ```
2. Set breakpoints by clicking in the gutter
3. Start debugging by pressing F5 or using the Debug menu
4. Use the Debug panel to:
   - Step through code (F10)
   - Step into functions (F11)
   - View variables and their values
   - Evaluate expressions in the Debug Console
#### JetBrains IDEs (CLion/IntelliJ with Rust plugin)

1. Create Run/Debug configurations for:
   - The main application
   - Specific test files
   - All tests

2. Debugging features to use:
   - Expression evaluation
   - Memory view
   - Smart step-into
   - Conditional breakpoints
### Command Line Debugging

For environments without IDE support, use:

1. **LLDB/GDB:**

   ```bash
   # Build with debug symbols
   cargo build

   # Start the debugger
   lldb ./target/debug/navius

   # Set breakpoints
   breakpoint set --file user_service.rs --line 52

   # Run the program
   run

   # After hitting a breakpoint
   frame variable      # Show variables in the current frame
   thread backtrace    # Show the current stack
   expression user.id  # Evaluate an expression
   ```

2. **cargo-lldb:**

   ```bash
   cargo install cargo-lldb
   cargo lldb --bin navius
   ```
### Specialized Debugging Tools

1. **Memory Analysis:**
   - Valgrind for memory leaks: `valgrind --leak-check=full ./target/debug/navius`
   - ASAN (Address Sanitizer): build with `-Z sanitizer=address`

2. **Thread Analysis:**
   - Inspect thread states: `ps -T -p <PID>`
   - Thread contention: `perf record -g -p <PID>`

3. **Network Debugging:**
   - Wireshark for packet analysis
   - `tcpdump` for network traffic capture
   - `curl` for API request testing
## Logging and Tracing

### Structured Logging

Navius uses the `tracing` crate for structured logging:

```rust
use tracing::{debug, error, info, instrument, warn};

#[instrument(skip(password))]
pub async fn authenticate_user(username: &str, password: &str) -> Result<User, AuthError> {
    debug!("Attempting to authenticate user: {}", username);

    match user_repository.find_by_username(username).await {
        Ok(user) => {
            if verify_password(password, &user.password_hash) {
                info!("User authenticated successfully: {}", username);
                Ok(user)
            } else {
                warn!("Failed authentication attempt for user: {}", username);
                Err(AuthError::InvalidCredentials)
            }
        }
        Err(e) => {
            error!(error = ?e, "Database error during authentication");
            Err(AuthError::DatabaseError(e))
        }
    }
}
```
### Log Levels
Use appropriate log levels:
- ERROR: Application errors requiring immediate attention
- WARN: Unexpected situations that don't cause application failure
- INFO: Important events for operational insights
- DEBUG: Detailed information useful for debugging
- TRACE: Very detailed information, typically for pinpointing issues
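To make the levels concrete, here is how they map onto the `tracing` macros; the messages are illustrative, not actual Navius log lines:

```rust
use tracing::{debug, error, info, trace, warn};

fn log_level_examples() {
    error!("payment processor unreachable; request will fail");
    warn!("retrying database connection (attempt 2 of 3)");
    info!("server listening on 0.0.0.0:3000");
    debug!(user_id = 42, "cache miss; falling back to database");
    trace!("parser state: 512 bytes buffered, expecting header");
}
```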
### Configuring Logging

Set via environment variables:

```bash
# Set log level
export RUST_LOG=navius=debug,warp=info

# Log to file
export RUST_LOG_STYLE=always
export RUST_LOG_FILE=/var/log/navius.log
```
Or in code:
```rust
use tracing_subscriber::{fmt::format::FmtSpan, EnvFilter};

fn setup_logging() {
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("navius=info,warp=warn"));

    tracing_subscriber::fmt()
        .with_env_filter(filter)
        .with_span_events(FmtSpan::CLOSE)
        .with_file(true)
        .with_line_number(true)
        .init();
}
```
### Log Analysis

For analyzing logs:

1. **Search with grep/ripgrep:**

   ```bash
   rg "error|exception" navius.log
   ```

2. **Context with before/after lines:**

   ```bash
   rg -A 5 -B 2 "DatabaseError" navius.log
   ```

3. **Filter by time period:**

   ```bash
   rg "2023-04-07T14:[0-5]" navius.log
   ```

4. **Count occurrences:**

   ```bash
   rg -c "AUTH_FAILED" navius.log
   ```
## Rust-Specific Debugging Techniques

### Debug Prints

Use the `dbg!` macro for quick debugging:

```rust
// Instead of
let result = complex_calculation(x, y);
println!("Result: {:?}", result);

// Use dbg! to show file/line and the expression itself
let result = dbg!(complex_calculation(x, y));
```
### Unwrap Alternatives

Replace `unwrap()` and `expect()` with better error handling:

```rust
// Instead of
let user = db.find_user(id).unwrap();

// Use more descriptive handling
let user = db.find_user(id)
    .map_err(|e| {
        error!("Failed to retrieve user {}: {:?}", id, e);
        e
    })?;
```
### Narrowing Down Rust Compiler Errors

For complex compile errors:

1. **Binary Search** - Comment out sections of code until the error disappears
2. **Type Annotations** - Add explicit type annotations to clarify issues (see the sketch after this list)
3. **Minimal Example** - Create a minimal failing example
4. **Check Versions** - Verify dependency versions for compatibility
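For example, an explicit annotation frequently resolves an ambiguous `collect()`; this is a generic illustration, not Navius code:

```rust
use std::collections::HashMap;

fn count_examples() {
    let pairs = vec![("a", 1), ("b", 2)];

    // Ambiguous: `collect()` could build a Vec, HashMap, BTreeMap, ...
    // let counts = pairs.into_iter().collect(); // error: type annotations needed

    // An explicit annotation resolves the inference error.
    let counts: HashMap<&str, i32> = pairs.into_iter().collect();
    assert_eq!(counts["a"], 1);
}
```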
### Debugging Async Code

Async code can be challenging to debug:

1. **Instrument async functions:**

   ```rust
   #[instrument(skip(request))]
   async fn handle_request(request: Request) -> Response {
       // ...
   }
   ```

2. **Use `with_span_events` to trace async execution:**

   ```rust
   tracing_subscriber::fmt()
       .with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
       .init();
   ```

3. **Inspect tasks:**

   ```rust
   tokio::spawn(async move {
       let span = tracing::info_span!("worker_task", id = %task_id);
       let _guard = span.enter();
       // task code...
   });
   ```
### Memory Analysis

For memory issues:

1. **Check for leaks with the `Drop` trait:**

   ```rust
   impl Drop for MyResource {
       fn drop(&mut self) {
           debug!("MyResource being dropped: {:?}", self.id);
       }
   }
   ```

2. **Use weak references where appropriate:**

   ```rust
   use std::cell::RefCell;
   use std::rc::{Rc, Weak};

   struct Parent {
       children: Vec<Rc<RefCell<Child>>>,
   }

   struct Child {
       parent: Weak<RefCell<Parent>>,
   }
   ```
## Database Debugging

### Query Analysis

For slow or problematic database queries:

1. **Query Logging** - Enable PostgreSQL query logging:

   ```conf
   # In postgresql.conf
   log_min_duration_statement = 100  # Log queries taking > 100ms
   ```

2. **Query Explain** - Use `EXPLAIN ANALYZE`:

   ```sql
   EXPLAIN ANALYZE SELECT * FROM users WHERE email LIKE '%example.com';
   ```

3. **Check Indexes** - Verify appropriate indexes exist:

   ```sql
   SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'users';
   ```
### Connection Issues

For database connection problems:

1. **Connection Pool Diagnostics:**

   ```rust
   // Log connection pool status; fetch the status once so all three
   // numbers come from the same snapshot
   let status = pool.status().await;
   info!(
       "DB Pool: active={}, idle={}, size={}",
       status.active, status.idle, status.size
   );
   ```

2. **Check Connection Parameters:**

   ```rust
   let conn_params = PgConnectOptions::new()
       .host(&config.db_host)
       .port(config.db_port)
       .username(&config.db_user)
       .password(&config.db_password)
       .database(&config.db_name);

   debug!("Connection parameters: {:?}", conn_params);
   ```

3. **Manual Connection Test:**

   ```bash
   PGPASSWORD=your_password psql -h hostname -U username -d database -c "\conninfo"
   ```
## API Debugging

### Request/Response Logging

For API debugging:

1. **Add request/response middleware:**

   ```rust
   async fn log_request_response(
       req: Request,
       next: Next,
   ) -> Result<impl IntoResponse, (StatusCode, String)> {
       let path = req.uri().path().to_string();
       let method = req.method().clone();
       let req_id = Uuid::new_v4();
       let start = std::time::Instant::now();

       info!(request_id = %req_id, %method, %path, "Request received");

       let response = next.run(req).await;

       let status = response.status();
       let duration = start.elapsed();

       info!(
           request_id = %req_id,
           %method,
           %path,
           status = %status.as_u16(),
           duration_ms = %duration.as_millis(),
           "Response sent"
       );

       Ok(response)
   }
   ```

2. **API Testing Tools:**
   - Use Postman or Insomnia for manual API testing
   - Create collections for common request scenarios
   - Save environments for different setups (dev, test, prod)

3. **curl for quick tests:**

   ```bash
   curl -v -X POST http://localhost:3000/api/users \
     -H "Content-Type: application/json" \
     -d '{"username":"test", "password":"test123"}'
   ```
## Performance Debugging

### Identifying Performance Issues

1. **Profiling with `flamegraph`:**

   ```bash
   cargo install flamegraph
   CARGO_PROFILE_RELEASE_DEBUG=true cargo flamegraph --bin navius
   ```

2. **Benchmarking with Criterion:**

   ```rust
   use criterion::{black_box, criterion_group, criterion_main, Criterion};

   fn benchmark_user_service(c: &mut Criterion) {
       let service = UserService::new(/* dependencies */);

       c.bench_function("find_user_by_id", |b| {
           b.iter(|| service.find_by_id(black_box(1)))
       });
   }

   criterion_group!(benches, benchmark_user_service);
   criterion_main!(benches);
   ```

3. **Request Timing:**

   ```rust
   async fn handle_request() -> Response {
       let timer = std::time::Instant::now();

       // Handle request...

       let duration = timer.elapsed();
       info!("Request processed in {}ms", duration.as_millis());

       // Return response...
   }
   ```
### Common Performance Issues

1. **N+1 Query Problem:**
   - Symptom: Multiple sequential database queries
   - Solution: Use joins or batch fetching (see the sketch after this list)

2. **Missing Indexes:**
   - Symptom: Slow queries with table scans
   - Solution: Add appropriate indexes

3. **Blocking Operations in Async Context:**
   - Symptom: High latency, thread pool exhaustion
   - Solution: Move blocking operations to the blocking task pool

   ```rust
   let result = tokio::task::spawn_blocking(move || {
       // CPU-intensive or blocking operation
       expensive_calculation()
   }).await?;
   ```

4. **Memory Leaks:**
   - Symptom: Growing memory usage over time
   - Solution: Check for unclosed resources and circular references
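As a sketch of the batch-fetch fix for the N+1 problem, assuming `sqlx` with PostgreSQL; the `comments` table and `Comment` type are hypothetical:

```rust
use sqlx::PgPool;

#[derive(sqlx::FromRow)]
struct Comment {
    id: i64,
    post_id: i64,
    body: String,
}

// Instead of issuing one query per post (N+1), fetch the comments for a
// whole set of posts in a single round trip using ANY($1).
async fn comments_for_posts(pool: &PgPool, post_ids: &[i64]) -> sqlx::Result<Vec<Comment>> {
    sqlx::query_as::<_, Comment>(
        "SELECT id, post_id, body FROM comments WHERE post_id = ANY($1)",
    )
    .bind(post_ids)
    .fetch_all(pool)
    .await
}
```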
## Advanced Debugging Scenarios

### Race Conditions

For debugging concurrency issues:

1. **Add Tracing for Async Operations:**

   ```rust
   #[instrument(skip(data))]
   async fn process_data(id: u64, data: Vec<u8>) {
       info!("Starting processing");
       // Processing code...
       info!("Finished processing");
   }
   ```

2. **Use Atomic Operations:**

   ```rust
   use std::sync::atomic::{AtomicUsize, Ordering};

   static COUNTER: AtomicUsize = AtomicUsize::new(0);

   fn increment_counter() {
       let prev = COUNTER.fetch_add(1, Ordering::SeqCst);
       debug!("Counter incremented from {} to {}", prev, prev + 1);
   }
   ```

3. **Debugging Deadlocks:**
   - Add timeouts to lock acquisitions (see the sketch after this list)
   - Log lock acquisition and release
   - Use deadlock detection in development
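A sketch of the lock-timeout technique using Tokio; the five-second budget and the `Vec<u8>` payload are arbitrary placeholders:

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;
use tokio::time::timeout;

async fn with_lock_timeout(state: Arc<Mutex<Vec<u8>>>) {
    // If the lock cannot be acquired within 5s, log and bail out instead
    // of hanging forever, which is a strong hint that a deadlock exists.
    match timeout(Duration::from_secs(5), state.lock()).await {
        Ok(guard) => {
            tracing::debug!(len = guard.len(), "lock acquired");
            // ... critical section ...
        }
        Err(_) => {
            tracing::error!("possible deadlock: lock not acquired within 5s");
        }
    }
}
```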
### Memory Corruption

For possible memory corruption:

1. **Use Address Sanitizer:**

   ```bash
   RUSTFLAGS="-Z sanitizer=address" cargo test
   ```

2. **Check Unsafe Code:**
   - Review all `unsafe` blocks
   - Verify pointer safety
   - Check lifetime correctness

3. **Foreign Function Interface Issues:**
   - Verify signatures match
   - Check data marshaling
   - Ensure proper resource cleanup
## Debugging in Production

### Safe Production Debugging

1. **Structured Logging:**
   - Use context-rich structured logs
   - Include correlation IDs for request tracing
   - Log adequate information without sensitive data

2. **Metrics and Monitoring:**
   - Track key performance indicators (see the counter sketch after this list)
   - Set up alerts for anomalies
   - Use distributed tracing for complex systems

3. **Feature Flags:**
   - Enable additional logging in production for specific issues:

   ```rust
   if feature_flags.is_enabled("enhanced_auth_logging") {
       debug!("Enhanced auth logging: {:?}", auth_details);
   }
   ```
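As a minimal, dependency-free sketch of KPI tracking (a real deployment would more likely export metrics to Prometheus or a similar backend; the counters here are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Simple process-local counters; a real setup would export these
// to a monitoring backend rather than only logging them.
static REQUESTS_TOTAL: AtomicU64 = AtomicU64::new(0);
static REQUESTS_FAILED: AtomicU64 = AtomicU64::new(0);

fn record_request(success: bool) {
    REQUESTS_TOTAL.fetch_add(1, Ordering::Relaxed);
    if !success {
        REQUESTS_FAILED.fetch_add(1, Ordering::Relaxed);
    }
}

fn report_kpis() {
    let total = REQUESTS_TOTAL.load(Ordering::Relaxed);
    let failed = REQUESTS_FAILED.load(Ordering::Relaxed);
    tracing::info!(total, failed, "request KPIs");
}
```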
### Post-Mortem Analysis

For analyzing production issues after they occur:

1. **Log Aggregation:**
   - Collect logs centrally
   - Use tools like the ELK Stack or Grafana Loki
   - Create dashboards for common issues

2. **Error Tracking:**
   - Integrate with error-tracking services (see the sketch after this list)
   - Group similar errors
   - Track error rates and trends

3. **Core Dumps:**
   - Enable core dumps in production
   - Secure sensitive information
   - Analyze with `rust-gdb`
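For error tracking, integration often looks roughly like this with the `sentry` crate (shown as an assumed example; Navius does not mandate a particular service):

```rust
// Keep the returned guard alive for the whole program; dropping it
// flushes pending events and shuts down the transport.
fn init_error_tracking() -> sentry::ClientInitGuard {
    sentry::init((
        // DSN placeholder; the real value comes from the service's
        // project settings.
        "https://examplePublicKey@o0.ingest.sentry.io/0",
        sentry::ClientOptions {
            release: sentry::release_name!(),
            ..Default::default()
        },
    ))
}

// Errors can then be reported explicitly where they are handled:
// sentry::capture_message("payment sync failed", sentry::Level::Error);
```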