---
title: Navius Documentation
description: Comprehensive documentation for the Navius framework
category: index
tags:
  - documentation
  - index
  - overview
related:
  - 01_getting_started/
  - 04_guides/
  - 05_reference/
  - 03_contributing/
  - 02_examples/
last_updated: March 27, 2025
version: 1.0
---
# Navius Documentation

This repository contains the official documentation for the Navius framework, providing comprehensive guides, tutorials, references, and examples for developers building applications with Navius.

## Documentation Structure

The documentation is organized into clear sections to help you find what you need:

### Getting Started

Everything you need to start using Navius, including installation, a quickstart guide, and basic concepts.

### Examples

Practical code examples demonstrating how to implement common features and solve typical challenges.

- Basic application examples
- Integration examples
- Advanced feature implementations
- Sample projects

### Contributing

Guidelines for contributing to Navius, including code standards, the pull request process, and the development workflow.
### Guides

Comprehensive guides for developing applications with Navius, organized by topic.

- Development
- Features
- Security
- Performance

### Reference

Detailed technical reference documentation for Navius APIs, configuration options, and patterns.

- API Reference
- Configuration Reference
- Patterns
## Documentation Highlights
- Comprehensive API Reference: Detailed documentation for all Navius APIs with request/response examples, error handling, and integration patterns.
- Step-by-Step Guides: Clear, actionable guides for implementing common features and best practices.
- Practical Examples: Real-world code examples that demonstrate how to use Navius effectively.
- Development Best Practices: Guidance on IDE setup, testing, debugging, and performance optimization.
- Security Implementation: Detailed guides on implementing authentication, authorization, and data protection.
## Using the Documentation

### For New Users
If you're new to Navius, start with:
- Installation Guide to set up Navius
- Quickstart Guide to create your first application
- Hello World Tutorial for a step-by-step walkthrough
### For Regular Developers
If you're already using Navius:
- Explore the Guides for implementing specific features
- Refer to the API Reference for detailed technical information
- Check out Examples for code samples and patterns
### For Contributors
To contribute to Navius:
- Read the Contribution Guidelines
- Follow the Development Workflow
- Submit your changes following the Pull Request Process
## Documentation Updates
This documentation is continuously improved. Recent updates include:
- Enhanced API reference documentation with comprehensive examples
- New comprehensive security guides
- Improved development setup and IDE configuration guidance
- Expanded testing and debugging documentation
## Support
If you have questions about using Navius or need help with the documentation:
- GitHub Issues for bug reports and feature requests
- Discord Community for community support and discussions
- Stack Overflow using the 'navius' tag
## License
This documentation is licensed under the MIT License.
## Documentation Sections

### Getting Started
Quick start guides to get up and running with Navius:
- Installation - How to install Navius
- Development Setup - Setting up your development environment
- First Steps - Getting started with Navius
### Examples
Practical code examples:
- Overview - Introduction to examples
- Spring Boot Comparison - Comparing with Spring Boot
- Two-Tier Cache Implementation - Implementing two-tier caching
- Server Customization System - Using the feature system
- Repository Pattern Example - Implementing the generic repository pattern
- Logging Service Example - Using the generic logging service
- Database Service Example - Working with the generic database service
- Health Service Example - Creating custom health indicators
- Cache Provider Example - Using the generic cache providers
### Contributing
Guidelines for contributors:
- Overview - Introduction to contributing
- Contributing Guide - How to contribute
- Code of Conduct - Community guidelines
- Development Process - Development workflow
- Testing Guidelines - Writing tests
- Onboarding - Getting started as a contributor
- IDE Setup - Setting up your development environment
- Testing Prompt - Testing guidelines
- Test Implementation Template - Templates for tests
### Guides
Practical guides for using Navius:
- Overview - Introduction to Navius guides
- Development - Development workflow and practices
  - Development Workflow - Day-to-day development process
  - Testing Guide - How to test Navius applications
  - Debugging Guide - Debugging your applications
  - IDE Setup - Setting up your development environment
  - Git Workflow - Version control practices
- Features - Implementing specific features
  - Authentication - Implementing authentication
  - API Integration - Integrating with external APIs
  - PostgreSQL Integration - Working with PostgreSQL in features
  - Redis Caching - Implementing basic caching
  - Server Customization CLI - Using the feature selection CLI
  - WebSocket Support - Real-time communication
- Deployment - Deploying Navius applications
  - Production Deployment - Deploying to production
  - Docker Deployment - Working with Docker
  - AWS Deployment - Deploying to AWS
  - Kubernetes Deployment - Deploying to Kubernetes
- Caching Strategies - Advanced caching with two-tier cache
- PostgreSQL Integration - Comprehensive PostgreSQL integration
- Application Structure - App structure guide
- Configuration - Configuration guide
- Dependency Injection - DI guide
- Error Handling - Error handling guide
- Feature Selection - Feature selection guide
- Service Registration - Service registration guide
- Testing - Testing guide
### Reference
Technical reference documentation:
- Overview - Introduction to reference documentation
- API - API documentation
  - API Resources - Core API resources
  - Authentication API - Authentication endpoints
  - Database API - Database interaction APIs
- Architecture - Architecture patterns and principles
  - Principles - Architectural principles
  - Project Structure - Project structure overview
  - Project Structure Recommendations - Recommended structure
  - Directory Organization - How directories are organized
  - Component Architecture - Component design
  - Design Principles - Design principles
  - Extension Points - Extension points
  - Module Dependencies - Module dependencies
  - Provider Architecture - Provider architecture
  - Service Architecture - Service architecture
  - Spring Boot Migration - Spring Boot migration
- Auth - Authentication documentation
  - Error Handling - Auth error handling
  - Auth Circuit Breaker - Auth circuit breaker
  - Auth Metrics - Auth metrics
  - Auth Provider Implementation - Auth provider implementation
- Configuration - Configuration options and settings
  - Environment Variables - Environment configuration
  - Application Config - Application settings
  - Cache Config - Cache system configuration
  - Feature Config - Server customization configuration
  - Logging Config - Logging configuration
  - Security Config - Security settings
- Patterns - Common design patterns
  - API Resource Pattern - API design patterns
  - Import Patterns - Module import patterns
  - Caching Patterns - Effective caching strategies
  - Error Handling - Error handling approaches
  - Testing Patterns - Testing best practices
  - Repository Pattern - Entity repository pattern
  - Logging Service Pattern - Generic logging service implementations
- Standards - Code and documentation standards
  - Naming Conventions - Naming guidelines
  - Code Style - Code formatting standards
  - Generated Code - Generated code guidelines
  - Security Standards - Security best practices
  - Documentation Standards - Documentation guidelines
  - Configuration Standards - Configuration standards
  - Error Handling Standards - Error handling standards
  - Error Handling - Error handling guide
- Generated - Generated reference documentation
  - API Index - API index
  - Configuration Index - Configuration index
  - Development Configuration - Development configuration
  - Production Configuration - Production configuration
  - Testing Configuration - Testing configuration
- Features Index - Features index
### Roadmaps
Project roadmaps and future plans:
- Overview - Introduction to project roadmaps
- Template for Updating - How to update roadmaps
- Dependency Injection - DI implementation roadmap
- Database Integration - Database features roadmap
- Testing Framework - Testing capabilities roadmap
### Miscellaneous
Additional resources and documentation:
- Feature System - Overview of the feature system
- Testing Guidance - Additional testing guidance
- Document Template - Documentation template
- Migration Plan - Documentation migration plan
## Documentation Search
Use the search functionality in the top bar to search through all documentation, or use your browser's search (Ctrl+F / Cmd+F) to search within the current page.
## Documentation Standards
All documentation follows these standards:
- Frontmatter: Each document includes metadata in the YAML frontmatter
- Structure: Clear headings and subheadings with logical progression
- Code Examples: Practical examples with syntax highlighting
- Cross-referencing: Links to related documentation
- Up-to-date: Regular reviews and updates to ensure accuracy
## Need Help?
If you can't find what you're looking for, please:
- Check the GitLab Issues for known documentation issues
- Open a new documentation issue if you find something missing or incorrect
---
title: Getting Started with Navius
description: "Complete introduction and quick start guides for the Navius framework, including installation, setup, and building your first application"
category: getting-started
tags:
  - introduction
  - installation
  - setup
  - quickstart
  - development
  - tutorial
related:
  - installation.md
  - quickstart.md
  - development-setup.md
  - first-steps.md
  - hello-world.md
  - ../04_guides/development/development-workflow.md
  - ../05_reference/architecture/principles.md
last_updated: April 8, 2025
version: 1.2
status: active
---
# Getting Started with Navius

## Overview
Welcome to Navius! This section provides everything you need to start building high-performance, maintainable applications with the Navius framework. Whether you're new to Rust or an experienced developer, these guides will help you quickly set up your environment and build your first application.
Navius is a modern, opinionated web framework for Rust that combines the performance benefits of Rust with the developer experience of frameworks like Spring Boot. It provides built-in support for dependency injection, configuration management, API development, and more.
## Quick Navigation
- Quickstart Guide - Get up and running in minutes
- Installation Guide - Set up Navius and its dependencies
- Development Setup - Configure your development environment
- First Steps - Create your first Navius application
- Hello World Tutorial - Build a simple REST API
## Getting Started in 5 Minutes

For experienced developers who want to dive right in:

```bash
# Install Navius (requires Rust 1.70+)
git clone https://github.com/your-organization/navius.git
cd navius

# Build the framework
cargo build

# Run the development server
./run_dev.sh

# Create a new project (optional)
cargo new --bin my-navius-app
cd my-navius-app

# Add Navius dependency to Cargo.toml
# [dependencies]
# navius = { path = "../navius" }
# tokio = { version = "1", features = ["full"] }
# axum = "0.6"
```
## Prerequisites

Before you begin with Navius, ensure you have:

- Rust (version 1.70.0 or later)
  - Install from rust-lang.org
  - Verify with `rustc --version`
- Development Environment
  - A code editor or IDE (VS Code or JetBrains CLion recommended)
  - Git for version control
  - Terminal/command-line access
- Recommended Knowledge
  - Basic Rust programming concepts
  - Familiarity with web development concepts (HTTP, REST, APIs)
  - Understanding of asynchronous programming principles
## Installation Options

Navius offers multiple installation methods to fit your workflow:

### Option 1: Using Cargo (Simplest)

```bash
cargo install navius
```

This installs the Navius CLI tool, allowing you to create and manage Navius projects.

### Option 2: From Source (Recommended for Development)

```bash
git clone https://github.com/your-organization/navius.git
cd navius
cargo install --path .
```

This approach gives you access to the latest features and allows you to contribute to the framework.

### Option 3: As a Dependency in Your Project

Add to your `Cargo.toml`:

```toml
[dependencies]
navius = "0.1.0"
tokio = { version = "1", features = ["full"] }
axum = "0.6.0"
```
## Recommended Development Setup

For the best development experience, we recommend:

### 1. Development Tools

- VS Code with these extensions:
  - rust-analyzer
  - Even Better TOML
  - crates
  - LLDB Debugger
- Terminal Tools:
  - `cargo-watch` for auto-reloading (`cargo install cargo-watch`)
  - `cargo-expand` for macro debugging (`cargo install cargo-expand`)
  - `cargo-edit` for dependency management (`cargo install cargo-edit`)

### 2. Environment Setup

- Docker for containerized development (databases, Redis, etc.)
- Git with pre-commit hooks (as described in Development Setup)
- Environment Configuration (custom `.env` files for different environments)
See the Development Setup guide for detailed instructions.
## Learning Path

We recommend following this path to learn Navius effectively:

### 1. Basic Concepts (Start Here)
- Complete the Installation Guide
- Set up your environment with Development Setup
- Build your first app with First Steps
- Try the Hello World Tutorial
### 2. Core Framework Concepts
- Learn about dependency injection and service architecture
- Understand configuration management
- Explore routing and middleware
- Master error handling and logging
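As a first taste of the error-handling step, application errors typically map onto HTTP status codes. The sketch below is a plain-Rust illustration with hypothetical names (`AppError`, `status_code`), not Navius's actual error types:

```rust
use std::fmt;

// A hypothetical application error type (illustrative only).
#[derive(Debug)]
enum AppError {
    NotFound(String),
    Validation(String),
    Internal(String),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::NotFound(what) => write!(f, "not found: {what}"),
            AppError::Validation(msg) => write!(f, "invalid request: {msg}"),
            AppError::Internal(msg) => write!(f, "internal error: {msg}"),
        }
    }
}

// Map each error variant to the HTTP status a handler would return.
fn status_code(err: &AppError) -> u16 {
    match err {
        AppError::NotFound(_) => 404,
        AppError::Validation(_) => 400,
        AppError::Internal(_) => 500,
    }
}

fn main() {
    let err = AppError::NotFound("user 42".to_string());
    // prints "404: not found: user 42"
    println!("{}: {}", status_code(&err), err);
}
```

In a real handler, the framework's error type would implement the response conversion for you; the point here is the mapping from domain errors to statuses.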
### 3. Advanced Topics
- Database integration with SQLx or Diesel
- Authentication and authorization
- Testing strategies
- Deployment considerations
## Common Tasks

Here's a quick reference for common Navius development tasks:

### Create a New Navius Application

```bash
# With the Navius CLI
navius new my-project
cd my-project

# Or manually with Cargo
cargo new --bin my-project
cd my-project
# Then add Navius to dependencies
```

### Run Your Navius Application

```bash
# Using the development script
./run_dev.sh

# With hot reloading
./run_dev.sh --watch

# Manually with cargo
cargo run
```

### Test Your Application

```bash
# Run all tests
cargo test

# Run specific tests
cargo test --package navius --lib -- app::hello::tests

# Run tests with coverage (requires cargo-tarpaulin)
cargo tarpaulin --out Html
```

### Build for Production

```bash
# Build an optimized binary
cargo build --release

# Run in production mode
./run_prod.sh
```
## Navius Framework Structure

Understanding the framework structure helps navigate the documentation:

```text
navius/
├── src/
│   ├── app/        # Your application code goes here
│   ├── core/       # Framework core components
│   ├── lib.rs      # Library definition
│   └── main.rs     # Entry point
├── config/         # Configuration files
├── tests/          # Integration tests
└── docs/           # Documentation
```
## Key Concepts
Navius is built around these core principles:
- Modularity: Components are organized into cohesive modules
- Dependency Injection: Services are registered and injected where needed
- Configuration-Driven: Application behavior is controlled via configuration
- Convention over Configuration: Sensible defaults with flexibility to override
- Testability: First-class support for testing at all levels
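To illustrate the dependency-injection idea, services can be stored by type and resolved where needed. This is a simplified, self-contained sketch of the pattern (the `ServiceRegistry` type here is hypothetical, not Navius's actual registry API):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::sync::Arc;

// A minimal type-indexed service registry (illustrative only).
#[derive(Default)]
struct ServiceRegistry {
    services: HashMap<TypeId, Arc<dyn Any + Send + Sync>>,
}

impl ServiceRegistry {
    // Register a service instance under its concrete type.
    fn register<T: Any + Send + Sync>(&mut self, service: T) {
        self.services.insert(TypeId::of::<T>(), Arc::new(service));
    }

    // Resolve a previously registered service by type.
    fn resolve<T: Any + Send + Sync>(&self) -> Option<Arc<T>> {
        self.services
            .get(&TypeId::of::<T>())
            .and_then(|s| Arc::clone(s).downcast::<T>().ok())
    }
}

struct GreetingService {
    greeting: String,
}

fn main() {
    let mut registry = ServiceRegistry::default();
    registry.register(GreetingService { greeting: "Hello, Navius!".into() });

    // A handler receives its dependencies from the registry rather than
    // constructing them itself, which is what makes it easy to test.
    let svc = registry.resolve::<GreetingService>().expect("service was registered");
    println!("{}", svc.greeting); // prints "Hello, Navius!"
}
```

Because handlers depend only on resolved services, tests can register fakes in place of real implementations.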
## Troubleshooting

If you encounter issues during setup:

| Issue | Solution |
|---|---|
| Build failures | Ensure you have the correct Rust version and dependencies |
| Missing libraries | Check OS-specific requirements in the Installation Guide |
| Configuration errors | Verify your config files match the expected format |
| Runtime errors | Check logs and ensure all required services are running |
## Support Resources

Need help with Navius?

- Documentation: Comprehensive guides in this documentation site
- Community: Join our Discord community
- GitHub Issues: Report bugs or suggest features on our repository
- Stack Overflow: Ask questions with the `navius` tag
## Contributing
We welcome contributions to Navius! Here's how to get involved:
- Read our Contributing Guidelines
- Set up your development environment
- Pick an issue from our tracker or propose a new feature
- Submit a pull request with your changes
## Next Steps
Ready to explore more?
- Examples - See Navius in action with practical examples
- Guides - In-depth guides on specific features
- Reference - Detailed API and architecture reference
- Roadmap - See what's coming in future releases
---
title: Navius Installation Guide
description: Comprehensive guide for installing, configuring, and running Navius in different environments
category: getting-started
tags:
  - installation
  - setup
  - configuration
  - prerequisites
  - deployment
related:
  - development-setup.md
  - first-steps.md
  - hello-world.md
  - ../04_guides/deployment/README.md
last_updated: March 27, 2025
version: 1.1
status: active
---
# Navius Installation Guide

## Overview
This guide provides comprehensive instructions for installing, configuring, and running Navius across different environments. It covers prerequisites, installation steps, configuration options, and verification procedures.
## Prerequisites

- Rust (1.70.0 or later)
  - Check version with `rustc --version`
  - Install from rust-lang.org
- Cargo (included with Rust)
  - Check version with `cargo --version`
- Git (2.30.0 or later)
  - Check version with `git --version`
  - Install from git-scm.com
- OpenAPI Generator CLI (for API client generation)
- PostgreSQL (optional, for database functionality)
  - Version 14 or later recommended
  - Docker (for containerized setup)
- Redis (optional, for caching functionality)
## Quick Start

For those familiar with Rust development, here's the quick setup process:

```bash
# Clone the repository
git clone https://github.com/your-organization/navius.git
cd navius

# Install dependencies
cargo build

# Create environment config
cp .env.example .env

# Run the application in development mode
./run_dev.sh
```

The server will start on http://localhost:3000 by default.
## Installation

### 1. Clone the Repository

```bash
git clone https://github.com/your-organization/navius.git
cd navius
```

### 2. Install Dependencies

Install all required dependencies using Cargo:

```bash
cargo build
```

This will download and compile all dependencies specified in the `Cargo.toml` file.
## Configuration

Navius uses a layered configuration approach, providing flexibility across different environments.

### YAML Configuration Files

- `config/default.yaml` - Base configuration for all environments
- `config/development.yaml` - Development-specific settings
- `config/production.yaml` - Production-specific settings
- `config/local.yaml` - Local overrides (not in version control)
- `config/local-{env}.yaml` - Environment-specific local overrides
### Environment Variables

Create a `.env` file in the project root:

```bash
cp .env.example .env
```

Edit it with, at minimum:

```bash
# Environment selection
RUN_ENV=development

# Essential environment variables
RUST_LOG=${APP_LOG_LEVEL:-info}

# Database configuration (if needed)
DATABASE_URL=postgres://username:password@localhost:5432/navius

# Secrets (if needed)
JWT_SECRET=your_jwt_secret_here
# API_KEY=your_api_key_here
```
Environment variables can also be used to override any configuration value from the YAML files, providing a secure way to manage sensitive information in production environments.
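The layering can be pictured as a simple merge in which later sources win: defaults, then environment-specific YAML, then environment variables. The sketch below is an illustrative reduction of that idea, not the framework's actual config loader:

```rust
use std::collections::HashMap;

// Merge configuration layers in order; later layers override earlier ones
// (default.yaml < development.yaml < environment variables).
// Illustrative sketch only, not Navius's actual loader.
fn merge_layers(layers: &[HashMap<String, String>]) -> HashMap<String, String> {
    let mut merged = HashMap::new();
    for layer in layers {
        for (key, value) in layer {
            merged.insert(key.clone(), value.clone());
        }
    }
    merged
}

fn main() {
    let defaults = HashMap::from([("server.port".to_string(), "3000".to_string())]);
    let env_overrides = HashMap::from([("server.port".to_string(), "8080".to_string())]);

    let config = merge_layers(&[defaults, env_overrides]);
    println!("server.port = {}", config["server.port"]); // prints "server.port = 8080"
}
```

This is why a secret such as `JWT_SECRET` never needs to appear in a YAML file: the environment layer always takes precedence.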
## Database Setup

### Local Development Database

#### Option 1: Using Docker (Recommended)

For local development, you can use Docker to run a PostgreSQL instance:

```bash
# From the project root:
cd test/resources/docker
docker-compose -f docker-compose.dev.yml up -d
```

This will create a PostgreSQL database accessible at:

- Host: localhost
- Port: 5432
- User: postgres
- Password: postgres
- Database: app
#### Option 2: Direct PostgreSQL Setup

If you prefer to set up PostgreSQL directly:

```bash
psql -c "CREATE DATABASE navius;"
psql -c "CREATE USER navius_user WITH ENCRYPTED PASSWORD 'your_password';"
psql -c "GRANT ALL PRIVILEGES ON DATABASE navius TO navius_user;"
```
### Database Configuration

To use the database with the application, ensure your `config/development.yaml` has the database section enabled:

```yaml
database:
  enabled: true
  url: "postgres://postgres:postgres@localhost:5432/app"
  max_connections: 10
  connect_timeout_seconds: 30
  idle_timeout_seconds: 300
```

Note: This configuration is for local development only. Production deployments should use a managed database service like AWS RDS with appropriate security settings.
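Internally, a section like this maps naturally onto a settings struct whose defaults mirror the YAML values. The struct below is a hypothetical sketch following the fields shown above, not necessarily Navius's actual configuration types:

```rust
// Hypothetical mirror of the `database` YAML section above (sketch only).
#[derive(Debug, Clone, PartialEq)]
struct DatabaseConfig {
    enabled: bool,
    url: String,
    max_connections: u32,
    connect_timeout_seconds: u64,
    idle_timeout_seconds: u64,
}

impl Default for DatabaseConfig {
    fn default() -> Self {
        Self {
            enabled: true,
            url: "postgres://postgres:postgres@localhost:5432/app".into(),
            max_connections: 10,
            connect_timeout_seconds: 30,
            idle_timeout_seconds: 300,
        }
    }
}

fn main() {
    let cfg = DatabaseConfig::default();
    println!("{} (max {} connections)", cfg.url, cfg.max_connections);
}
```

Keeping the struct field names aligned with the YAML keys is what lets a serde-style deserializer fill it in directly.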
### Run Migrations

Initialize the database schema:

```bash
cargo run --bin migration
```
## Running the Server

Navius provides several ways to run the server, optimized for different scenarios.

### Using the Development Script (Recommended)

For development:

```bash
./run_dev.sh
```

The development script supports several options:

```bash
./run_dev.sh [OPTIONS]
```

Options:

- `--skip-gen` - Skip API model generation
- `--release` - Build and run in release mode
- `--config-dir=DIR` - Use the specified config directory (default: config)
- `--env=FILE` - Use the specified .env file (default: .env)
- `--environment=ENV` - Use the specified environment (default: development)
- `--port=PORT` - Specify the server port (default: 3000)
- `--watch` - Restart the server on file changes
- `--run-migrations` - Run database migrations before starting
- `--no-health-check` - Skip health check validation after startup
- `--no-hooks` - Skip git hooks setup
- `--help` - Show the help message
### Using the Wrapper Script

```bash
# For development (default)
./run.sh

# For production
./run.sh --prod
```

This wrapper script automatically chooses the appropriate environment script based on the `--dev` or `--prod` flag.
### Manual Run

If you prefer to run the server manually (note that this may not include all setup steps performed by the run_dev.sh script):

```bash
cargo run
```

The server will start on http://localhost:3000 by default.
## Verification

To verify that Navius has been installed correctly:

1. Start the application in development mode:

   ```bash
   ./run_dev.sh
   ```

2. Open your browser and navigate to http://localhost:3000/actuator/health

You should see a health check response indicating the application is running.
## Core Endpoints

Navius provides these built-in endpoints:

- `GET /health` - Basic health check endpoint
- `GET /metrics` - Prometheus metrics endpoint
- `GET /actuator/health` - Detailed health check with component status
- `GET /actuator/info` - System information
- `GET /docs` - OpenAPI documentation (Swagger UI)
## API Documentation
API documentation is automatically generated and available at http://localhost:3000/docs when the server is running. The documentation includes:
- All API endpoints with descriptions
- Request/response schemas
- Authentication requirements
- Example requests and responses
## Troubleshooting

### Common Issues

| Issue | Solution |
|---|---|
| Compiler errors | Ensure you have the correct Rust version (`rustc --version`) |
| Database connection errors | Check your `.env` file and database credentials |
| Port conflicts | Ensure port 3000 is not in use by another application |
| Cargo build failures | Try `cargo clean` followed by `cargo build` |
| Missing dependencies | Install missing system dependencies (e.g., OpenSSL) |
### Database Connection Issues

If you encounter database connection issues:

1. Ensure PostgreSQL is running: `docker ps` or `pg_isready`
2. Verify the database exists: `psql -l`
3. Check your connection string in `.env`
4. Ensure firewall settings allow the connection
## Next Steps
- Continue to First Steps to learn about basic Navius concepts
- Try building a Hello World Application
- Set up your Development Environment for contributing
## Related Documents
- Development Setup - Next steps after installation
- Development Workflow - Understanding the development process
- Deployment Guide - Production deployment instructions
---
title: Navius Quickstart Guide
description: "A rapid introduction to get you up and running with Navius in minutes"
category: getting-started
tags:
  - quickstart
  - setup
  - installation
  - tutorial
  - beginners
related:
  - installation.md
  - development-setup.md
  - first-steps.md
  - hello-world.md
last_updated: April 8, 2025
version: 1.0
---
# Navius Quickstart Guide
This quickstart guide will help you build and run your first Navius application in just a few minutes. For more detailed information, see our comprehensive installation guide and development setup.
## Prerequisites
Before you begin, ensure you have:
- Rust (version 1.70 or later)
- Git
- Docker (optional, for database services)
- A code editor or IDE (VS Code or JetBrains CLion recommended)
Verify your Rust installation:

```bash
rustc --version
# Should show rustc 1.70.0 or later
```
## Step 1: Clone the Navius Template

The fastest way to get started is using our template project:

```bash
git clone https://github.com/navius/navius-template.git my-navius-app
cd my-navius-app
```
## Step 2: Launch the Development Environment

Start the development environment, which includes PostgreSQL and Redis:

```bash
# Start required services
docker-compose up -d

# Verify services are running
docker-compose ps
```
## Step 3: Configure Your Environment

The template includes a sample configuration file. Create a development environment file:

```bash
cp .env.example .env.development
```

Open `.env.development` and update the settings as needed:

```bash
DATABASE_URL=postgres://navius:navius@localhost:5432/navius_dev
REDIS_URL=redis://localhost:6379/0
LOG_LEVEL=debug
SERVER_PORT=3000
```
## Step 4: Build and Run

Build and run your Navius application:

```bash
# Build the application
cargo build

# Run in development mode
cargo run
```

You should see output similar to:

```text
[INFO] Navius Framework v0.8.1
[INFO] Loading configuration from .env.development
[INFO] Initializing database connection
[INFO] Starting Navius server on http://127.0.0.1:3000
[INFO] Server started successfully. Press Ctrl+C to stop.
```
## Step 5: Explore Your Application
Your application is now running! Open a web browser and navigate to:
- API: http://localhost:3000/api
- API Documentation: http://localhost:3000/api/docs
- Health Check: http://localhost:3000/health
## Step 6: Make Your First Change

Let's modify the application to add a custom endpoint:

1. Open `src/routes/mod.rs` and add a new route:

   ```rust
   // ... existing code ...
   pub fn configure_routes(app: &mut ServiceBuilder) -> &mut ServiceBuilder {
       app
           .route("/", get(handlers::index))
           .route("/hello", get(hello_world)) // Add this line
           .route("/health", get(handlers::health_check))
           // ... other routes ...
   }

   // Add this function
   async fn hello_world() -> impl IntoResponse {
       Json(json!({
           "message": "Hello from Navius!",
           "timestamp": chrono::Utc::now().to_rfc3339()
       }))
   }
   ```

2. Save the file and restart the server:

   ```bash
   # Stop the running server with Ctrl+C, then run again
   cargo run
   ```

3. Visit your new endpoint at http://localhost:3000/hello
## Step 7: Next Steps

Congratulations! You've successfully:

- Set up a Navius development environment
- Run your first Navius application
- Added a custom endpoint

### What to Try Next

- Create a more complex REST API
- Learn about dependency injection
- Explore database integration
- Check out the hello world tutorial for a step-by-step project
## Common Issues

### Could not connect to database

Problem: The server fails to start with database connection errors.

Solution: Ensure Docker is running and the containers are up with `docker-compose ps`. Verify the database credentials in `.env.development`.

### Port already in use

Problem: The server fails to start because port 3000 is already in use.

Solution: Change the `SERVER_PORT` in `.env.development` or stop the other application using port 3000.

### Cargo build fails

Problem: The build process fails with dependency or compilation errors.

Solution: Ensure you're using Rust 1.70+ with `rustc --version`. Run `cargo update` to update dependencies.
## Getting Help
If you encounter any issues:
- Check the troubleshooting guide
- Visit our community forum
- Join the Discord server for real-time help
- Read the detailed documentation in this site
## Additional Resources
---
title: First Steps with Navius
description: Guide to creating your first Navius application and understanding key concepts
category: getting-started
tags:
  - tutorial
  - quickstart
  - firstapp
  - endpoints
  - configuration
related:
  - installation.md
  - development-setup.md
  - hello-world.md
  - ../04_guides/development/development-workflow.md
  - ../02_examples/rest-api-example.md
last_updated: March 28, 2025
version: 1.1
status: active
---
# First Steps with Navius

## Overview
This guide walks you through creating your first Navius application. You'll learn how to set up a basic API endpoint, understand the project structure, configure the application, and run your first tests. By the end, you'll have a solid foundation for building more complex applications with Navius.
## Prerequisites
Before starting this guide, ensure you have:
- Completed the Installation Guide
- Set up your development environment following the Development Setup guide
- Basic knowledge of Rust programming language
- Familiarity with RESTful APIs
- A terminal or command prompt open in your Navius project directory
## Installation

To install the components needed for this guide:

```bash
# Clone the Navius repository
git clone https://github.com/your-organization/navius.git
cd navius

# Install dependencies
cargo build
```

For the complete installation process, refer to the Installation Guide.
## Configuration

Configure the application with the following settings:

```yaml
# config/default.yaml - Base configuration file
server:
  port: 3000
  host: "127.0.0.1"
  timeout_seconds: 30

logging:
  level: "info"
  format: "json"
```

Key configuration options:

- Environment variables can override any configuration value
- `config/development.yaml` contains development-specific settings
- `config/production.yaml` contains production-specific settings
- Create a `.env` file for local environment variables

For more detailed configuration information, see the Configuration Guide.
## Quick Start

For experienced developers who want to get started quickly:

```bash
# Clone the Navius repository (if not already done)
git clone https://github.com/your-organization/navius.git
cd navius

# Create a new module for your endpoint
mkdir -p src/app/hello
touch src/app/hello/mod.rs

# Add your endpoint code (see Section 2 below)
# Register your module in src/app/mod.rs
# Add your routes to src/app/router.rs

# Run the application
./run_dev.sh

# Test your endpoint
curl http://localhost:3000/hello
```
## 1. Understanding the Project Structure

Let's explore the Navius project structure to understand how the framework is organized:

```text
navius/
├── src/                     # Application source code
│   ├── app/                 # Application-specific code
│   │   ├── controllers/     # Request handlers
│   │   ├── models/          # Data models
│   │   ├── services/        # Business logic
│   │   ├── router.rs        # Route definitions
│   │   └── mod.rs           # Module exports
│   ├── core/                # Core framework components
│   │   ├── config/          # Configuration handling
│   │   ├── error/           # Error handling
│   │   ├── logging/         # Logging functionality
│   │   └── server/          # Server implementation
│   ├── lib.rs               # Library entry point
│   └── main.rs              # Application entry point
├── config/                  # Configuration files
│   ├── default.yaml         # Default configuration
│   ├── development.yaml     # Development environment config
│   └── production.yaml      # Production environment config
├── tests/                   # Integration tests
├── docs/                    # Documentation
└── .devtools/               # Development tools and scripts
```
### Key Directories

The most important directories for your development work are:

| Directory | Purpose |
|---|---|
| `src/app/` | Where you'll add your application-specific code |
| `src/core/` | Core framework components (generally don't modify directly) |
| `config/` | Configuration files for your application |
| `tests/` | Integration tests for your application |
### Navius Architecture

Navius follows a layered architecture:
- Router Layer - Defines HTTP routes and connects them to controllers
- Controller Layer - Handles HTTP requests and responses
- Service Layer - Contains business logic
- Repository Layer - Interfaces with data storage
This separation of concerns makes your code more maintainable and testable.
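The four layers can be sketched in plain Rust. This is an illustrative outline only: real Navius controllers are async HTTP handlers, and the names here (`UserRepository`, `UserService`, `greet_controller`) are hypothetical:

```rust
// Repository layer: data access.
struct UserRepository;

impl UserRepository {
    fn find_name(&self, id: u32) -> Option<String> {
        // A real repository would query the database here.
        if id == 1 { Some("Alice".to_string()) } else { None }
    }
}

// Service layer: business logic, built on the repository.
struct UserService {
    repo: UserRepository,
}

impl UserService {
    fn greeting(&self, id: u32) -> String {
        match self.repo.find_name(id) {
            Some(name) => format!("Hello, {name}!"),
            None => "Hello, stranger!".to_string(),
        }
    }
}

// Controller layer: translates a request into a service call and a response.
fn greet_controller(service: &UserService, id: u32) -> (u16, String) {
    (200, service.greeting(id))
}

fn main() {
    // The router layer would map GET /greet/:id to this controller.
    let service = UserService { repo: UserRepository };
    let (status, body) = greet_controller(&service, 1);
    println!("{status}: {body}"); // prints "200: Hello, Alice!"
}
```

Because each layer depends only on the one below it, you can unit-test the service with a stubbed repository and the controller with a stubbed service.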
## 2. Creating Your First Endpoint

Let's create a simple "hello world" API endpoint:

### Step 1: Create a New Module

Create a file at `src/app/hello/mod.rs` with the following content:

```rust
use axum::{routing::get, Router};

/// Returns the routes for the hello module
pub fn routes() -> Router {
    Router::new().route("/hello", get(hello_handler))
}

/// Handler for the hello endpoint
async fn hello_handler() -> String {
    "Hello, Navius!".to_string()
}

#[cfg(test)]
mod tests {
    use super::*;
    use axum::body::Body;
    use axum::http::{Request, StatusCode};
    use tower::ServiceExt;

    #[tokio::test]
    async fn test_hello_endpoint() {
        // Arrange
        let app = routes();
        let request = Request::builder()
            .uri("/hello")
            .method("GET")
            .body(Body::empty())
            .unwrap();

        // Act
        let response = app.oneshot(request).await.unwrap();

        // Assert
        assert_eq!(response.status(), StatusCode::OK);
        let body = hyper::body::to_bytes(response.into_body()).await.unwrap();
        let body_str = String::from_utf8(body.to_vec()).unwrap();
        assert_eq!(body_str, "Hello, Navius!");
    }
}
```
This code:
- Creates a route that responds to GET requests at `/hello`
- Sets up a simple handler that returns a greeting message
- Includes a test to verify the endpoint works correctly
Step 2: Register the Module
Update `src/app/mod.rs` to include your new module:

```rust
pub mod hello;
// ... other existing modules

// ... existing code
```
Step 3: Add the Route to the Application Router
Update `src/app/router.rs` to include your hello routes:

```rust
use crate::app::hello;
// ... other existing imports

pub fn app_router() -> Router {
    Router::new()
        // ... existing routes
        .merge(hello::routes())
}
```
3. Running Your Application
Now that you've created your first endpoint, let's run the application:
```bash
./run_dev.sh
```
This will:
- Compile your code
- Start the server on http://localhost:3000
- Enable hot reloading if you used the `--watch` option
You should see output similar to:
```text
[INFO] Navius starting in development mode...
[INFO] Server listening on 127.0.0.1:3000
```
4. Testing Your Endpoint
Manual Testing
You can test your new endpoint using `curl` from the command line:

```bash
curl http://localhost:3000/hello
```
You should see the response:
```text
Hello, Navius!
```
Or using a browser, navigate to:
http://localhost:3000/hello
Automated Testing
You can run the unit test you created:

```bash
cargo test --package navius --lib -- app::hello::tests::test_hello_endpoint
```

Or run all tests:

```bash
cargo test
```
5. Working with Configuration
Now, let's modify our endpoint to use configuration values, demonstrating how to work with Navius's configuration system.
Step 1: Update the Configuration File
Edit `config/default.yaml` to add our greeting configuration:

```yaml
# ... existing configuration

# Custom greeting configuration
greeting:
  message: "Hello, Navius!"
  language: "en"
  options:
    - "Welcome"
    - "Greetings"
    - "Hey there"
```
Step 2: Use the Configuration in Your Handler
Update `src/app/hello/mod.rs`:

```rust
use axum::{routing::get, Router, extract::State};
use crate::core::config::AppConfig;
use std::sync::Arc;

/// Routes for the hello module
pub fn routes() -> Router {
    Router::new()
        .route("/hello", get(hello_handler))
        .route("/hello/:name", get(hello_name_handler))
}

/// Handler for the basic hello endpoint
async fn hello_handler(State(config): State<Arc<AppConfig>>) -> String {
    config.get_string("greeting.message")
        .unwrap_or_else(|_| "Hello, Navius!".to_string())
}

/// Handler that greets a specific name
async fn hello_name_handler(
    axum::extract::Path(name): axum::extract::Path<String>,
    State(config): State<Arc<AppConfig>>,
) -> String {
    let greeting = config.get_string("greeting.message")
        .unwrap_or_else(|_| "Hello".to_string());
    format!("{}, {}!", greeting, name)
}

// ... existing test code
```
Step 3: Run with the Updated Configuration
```bash
./run_dev.sh
```
Now test both endpoints:
```bash
curl http://localhost:3000/hello
curl http://localhost:3000/hello/Developer
```
6. Adding a JSON Response
Let's enhance our endpoint to return a JSON response, which is common in modern APIs.
Step 1: Update Your Handler with JSON Support
Modify `src/app/hello/mod.rs`:

```rust
use axum::{routing::get, Router, extract::State, Json};
use serde::{Serialize, Deserialize};
use crate::core::config::AppConfig;
use std::sync::Arc;

/// Response model for greetings
#[derive(Serialize)]
struct GreetingResponse {
    message: String,
    timestamp: String,
}

/// Routes for the hello module
pub fn routes() -> Router {
    Router::new()
        .route("/hello", get(hello_handler))
        .route("/hello/:name", get(hello_name_handler))
        .route("/hello-json/:name", get(hello_json_handler))
}

/// Handler for the basic hello endpoint
async fn hello_handler(State(config): State<Arc<AppConfig>>) -> String {
    config.get_string("greeting.message")
        .unwrap_or_else(|_| "Hello, Navius!".to_string())
}

/// Handler that greets a specific name
async fn hello_name_handler(
    axum::extract::Path(name): axum::extract::Path<String>,
    State(config): State<Arc<AppConfig>>,
) -> String {
    let greeting = config.get_string("greeting.message")
        .unwrap_or_else(|_| "Hello".to_string());
    format!("{}, {}!", greeting, name)
}

/// Handler that returns a JSON greeting
async fn hello_json_handler(
    axum::extract::Path(name): axum::extract::Path<String>,
    State(config): State<Arc<AppConfig>>,
) -> Json<GreetingResponse> {
    let greeting = config.get_string("greeting.message")
        .unwrap_or_else(|_| "Hello".to_string());
    let now = chrono::Local::now().to_rfc3339();

    Json(GreetingResponse {
        message: format!("{}, {}!", greeting, name),
        timestamp: now,
    })
}

// ... existing test code
```
Step 2: Test the JSON Endpoint
```bash
curl http://localhost:3000/hello-json/Developer
```
You should receive a JSON response:
```json
{
  "message": "Hello, Developer!",
  "timestamp": "2025-03-27T12:34:56.789-07:00"
}
```
7. Understanding Dependency Injection
Navius uses dependency injection to make services available to your handlers. Let's create a simple service and inject it.
Step 1: Create a Greeting Service
Create a new file `src/app/hello/service.rs`:

```rust
/// Service for generating greetings
pub struct GreetingService {
    default_greeting: String,
}

impl GreetingService {
    /// Create a new GreetingService
    pub fn new(default_greeting: String) -> Self {
        Self { default_greeting }
    }

    /// Generate a greeting for the given name
    pub fn greet(&self, name: &str) -> String {
        format!("{}, {}!", self.default_greeting, name)
    }

    /// Get a formal greeting
    pub fn formal_greeting(&self, name: &str) -> String {
        format!("Greetings, {} - welcome to Navius!", name)
    }
}
```
Step 2: Update Your Module to Use the Service
Modify `src/app/hello/mod.rs`:

```rust
mod service;

use service::GreetingService;
use axum::{routing::get, Router, extract::State, Json};
use serde::Serialize;
use std::sync::Arc;

/// Create a router with the greeting service
pub fn routes() -> Router {
    let greeting_service = Arc::new(GreetingService::new("Hello".to_string()));

    Router::new()
        .route("/hello-service/:name", get(hello_service_handler))
        .layer(axum::extract::Extension(greeting_service))
        // ... other routes
}

/// Handler that uses the greeting service
async fn hello_service_handler(
    axum::extract::Path(name): axum::extract::Path<String>,
    axum::extract::Extension(service): axum::extract::Extension<Arc<GreetingService>>,
) -> String {
    service.greet(&name)
}

// ... other handlers and tests
```
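A benefit of keeping the greeting logic in a plain service struct is that it can be unit-tested without any HTTP machinery. The sketch below is self-contained (the `GreetingService` shape is repeated so it compiles on its own, independent of the module above):

```rust
/// Same shape as the GreetingService above, repeated so this
/// snippet compiles standalone.
pub struct GreetingService {
    default_greeting: String,
}

impl GreetingService {
    pub fn new(default_greeting: String) -> Self {
        Self { default_greeting }
    }

    pub fn greet(&self, name: &str) -> String {
        format!("{}, {}!", self.default_greeting, name)
    }
}

fn main() {
    // The service is exercised directly: no router, no request, no server.
    let service = GreetingService::new("Hello".to_string());
    assert_eq!(service.greet("Navius"), "Hello, Navius!");
    println!("service test passed");
}
```

This is exactly what makes the dependency-injection style pay off: the handler stays a thin adapter, while the logic worth testing lives in a type you can construct in one line.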
Key Concepts
The Navius Way
Navius encourages certain patterns and practices:
- Modularity - Organize code by feature in dedicated modules
- Separation of Concerns - Keep routing, handlers, and business logic separate
- Configuration-Driven - Use configuration files to control behavior
- Test-First Development - Write tests alongside your code
- Dependency Injection - Use DI for loose coupling and testability
Common Components
These are components you'll work with frequently:
| Component | Purpose | Location |
|---|---|---|
| Router | Define HTTP routes | `src/app/router.rs` |
| Handlers | Process HTTP requests | Feature modules |
| Services | Implement business logic | Feature modules |
| Models | Define data structures | `src/app/models/` |
| Config | Application configuration | `config/` directory |
Troubleshooting
| Issue | Solution |
|---|---|
| Compilation errors | Check for typos and imports; make sure dependencies are declared |
| Route not found | Verify the route is registered in the router with the exact path |
| Configuration not loading | Check YAML syntax and the path used in `config.get_*` calls |
| Test failures | Check that test expectations match the actual implementation |
Next Steps
Now that you've created your first Navius application, here are some next steps to explore:
- Build a Complete API - Expand your application with CRUD operations
- Add Database Integration - Connect to PostgreSQL using the database features
- Implement Authentication - Add authentication to secure your API endpoints
- Explore Middleware - Add request logging, error handling, and other middleware
- Write More Tests - Expand your test coverage with integration tests
Related Documents
- Hello World Tutorial - A more focused tutorial on building a simple application
- Development Setup - Setting up your development environment
- Development Workflow - Understanding the development process
- REST API Example - Building a complete REST API
- Testing Guide - Writing comprehensive tests
---
title: Navius Hello World Tutorial
description: A step-by-step guide to create your first Navius application
category: getting-started
tags:
  - tutorial
  - beginner
  - example
  - rest-api
  - getting-started
related:
  - installation.md
  - first-steps.md
  - development-setup.md
  - ../02_examples/rest-api-example.md
  - ../04_guides/dependency-injection.md
last_updated: March 28, 2025
version: 1.0
status: active
---
Navius Hello World Tutorial
Overview
This tutorial walks you through creating a simple "Hello World" REST API using the Navius framework. By the end, you'll have a functional application that demonstrates key Navius concepts including dependency injection, routing, and service architecture.
Prerequisites
Before beginning this tutorial, ensure you have:
- Rust installed (1.70.0 or newer)
- Cargo installed
- Completed the Installation Guide
- Optional: Completed the Development Setup
Installation
To install the components needed for this guide:
```bash
# Create a new project
cargo new hello-navius
cd hello-navius

# Add dependencies to Cargo.toml
# (See Step 2 in the tutorial below for details)
```
For the complete installation process, refer to the Installation Guide.
Configuration
Configure the application with the following settings:
```toml
# Cargo.toml configuration
[package]
name = "hello-navius"
version = "0.1.0"
edition = "2021"

[dependencies]
navius = "0.1.0"
tokio = { version = "1", features = ["full"] }
axum = "0.6.0"
serde = { version = "1.0", features = ["derive"] }
```
Key configuration options:
- `navius`: The core Navius framework
- `tokio`: Asynchronous runtime for Rust
- `axum`: Web framework for building APIs
- `serde`: Serialization/deserialization library
For more detailed configuration information, see the Configuration Guide.
Quick Start
If you're familiar with Rust and just want the code, here's a quick overview of what we'll build:
```bash
# Create a new project
cargo new hello-navius
cd hello-navius

# Add dependencies to Cargo.toml
# [dependencies]
# navius = "0.1.0"
# tokio = { version = "1", features = ["full"] }
# axum = "0.6.0"
# serde = { version = "1.0", features = ["derive"] }

# Run the application
cargo run

# Test the API
curl http://localhost:3000/hello/World
# Result: {"message":"Hello, World"}
```
Step-by-step Tutorial
Step 1: Create a New Project
First, create a new Rust project using Cargo:
```bash
cargo new hello-navius
cd hello-navius
```
Step 2: Add Dependencies
Edit your `Cargo.toml` file to add the necessary dependencies:

```toml
[package]
name = "hello-navius"
version = "0.1.0"
edition = "2021"

[dependencies]
navius = "0.1.0"
tokio = { version = "1", features = ["full"] }
axum = "0.6.0"
serde = { version = "1.0", features = ["derive"] }
```
These dependencies include:
- `navius`: The core Navius framework
- `tokio`: Asynchronous runtime for Rust
- `axum`: Web framework for building APIs
- `serde`: Serialization/deserialization library
Step 3: Set Up the Main Service
Create a new file `src/hello_service.rs` that implements our core service:

```rust
// src/hello_service.rs
use std::sync::Arc;

pub struct HelloService {
    greeting: String,
}

impl HelloService {
    pub fn new(greeting: String) -> Arc<Self> {
        Arc::new(Self { greeting })
    }

    pub fn greet(&self, name: &str) -> String {
        format!("{} {}", self.greeting, name)
    }
}
```
This service:
- Uses an `Arc` (Atomic Reference Counting) to enable safe sharing across threads
- Stores a greeting message that can be customized
- Provides a method to generate personalized greetings
Step 4: Implement a REST API Handler
Create a handler in `src/hello_handler.rs` to expose the service via a REST endpoint:

```rust
// src/hello_handler.rs
use axum::{extract::Path, response::Json, Extension};
use serde::Serialize;
use std::sync::Arc;

use crate::hello_service::HelloService;

#[derive(Serialize)]
pub struct GreetingResponse {
    message: String,
}

pub async fn greet_handler(
    Path(name): Path<String>,
    Extension(service): Extension<Arc<HelloService>>,
) -> Json<GreetingResponse> {
    let message = service.greet(&name);
    Json(GreetingResponse { message })
}
```
This handler:
- Uses Axum's path extraction to get the name parameter
- Accepts our `HelloService` as a dependency via the `Extension` extractor and `Arc`
- Returns a JSON response with the greeting message
- Uses Serde to serialize the response
Step 5: Set Up Application State
Create `src/app_state.rs` to manage application state and dependencies:

```rust
// src/app_state.rs
use std::sync::Arc;

use crate::hello_service::HelloService;

pub struct AppState {
    pub hello_service: Arc<HelloService>,
}

impl AppState {
    pub fn new() -> Self {
        let hello_service = HelloService::new("Hello,".to_string());
        Self { hello_service }
    }
}
```
The `AppState`:

- Acts as a container for all application services
- Initializes the `HelloService` with a default greeting
- Demonstrates basic dependency management
Step 6: Configure Routing
Create `src/router.rs` to set up the API routes:

```rust
// src/router.rs
use axum::{routing::get, Router, Extension};
use std::sync::Arc;

use crate::app_state::AppState;
use crate::hello_handler::greet_handler;

pub fn app_router() -> Router {
    let app_state = Arc::new(AppState::new());

    Router::new()
        .route("/hello/:name", get(greet_handler))
        .layer(Extension(app_state.hello_service.clone()))
}
```
The router:
- Creates an instance of AppState
- Defines a GET route that accepts a name parameter
- Uses Axum's extension system to inject our HelloService into handlers
Step 7: Configure the Main Application
Create the application entry point in `src/main.rs`:

```rust
// src/main.rs
mod app_state;
mod hello_handler;
mod hello_service;
mod router;

use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // Initialize router
    let app = router::app_router();

    // Set up the address
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));

    // Start the server
    println!("Server starting on http://{}", addr);
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```
The main function:
- Imports all our modules
- Uses the Tokio runtime for async execution
- Initializes the router
- Starts an HTTP server on localhost:3000
Step 8: Run the Application
Run your application with Cargo:
```bash
cargo run
```
You should see output indicating the server has started:
```text
Server starting on http://127.0.0.1:3000
```
Step 9: Test the API
With the server running, test the API using curl or a web browser:

```bash
curl http://localhost:3000/hello/World
```

You should receive a JSON response:

```json
{"message":"Hello, World"}
```

Try different names to see personalized responses:

```bash
curl http://localhost:3000/hello/Navius
```

Response:

```json
{"message":"Hello, Navius"}
```
Understanding the Code
Project Structure
Our project follows a clean separation of concerns:
```text
hello-navius/
├── Cargo.toml           # Project dependencies
└── src/
    ├── main.rs          # Application entry point
    ├── app_state.rs     # Application state management
    ├── hello_service.rs # Business logic
    ├── hello_handler.rs # API endpoints
    └── router.rs        # Route configuration
```
Key Concepts
This simple example demonstrates several important Navius concepts:
- Service Pattern: Business logic is encapsulated in the `HelloService`
- Dependency Injection: Services are created in `AppState` and injected where needed
- Handler Pattern: API endpoints are defined as handler functions
- Routing: URL paths are mapped to handlers in the router
Advanced Customization
Adding Configuration
To make the greeting configurable, you could add a configuration file:
```rust
// src/config.rs
pub struct AppConfig {
    pub default_greeting: String,
}

impl Default for AppConfig {
    fn default() -> Self {
        Self {
            default_greeting: "Hello,".to_string(),
        }
    }
}
```
Then update `app_state.rs` to use this configuration:

```rust
// In app_state.rs
use crate::config::AppConfig;

pub struct AppState {
    pub hello_service: Arc<HelloService>,
    pub config: AppConfig,
}

impl AppState {
    pub fn new() -> Self {
        let config = AppConfig::default();
        let hello_service = HelloService::new(config.default_greeting.clone());
        Self { hello_service, config }
    }
}
```
Adding Error Handling
For more robust error handling, you could update the service:

```rust
// Enhanced hello_service.rs
pub enum GreetingError {
    EmptyName,
}

impl HelloService {
    pub fn greet(&self, name: &str) -> Result<String, GreetingError> {
        if name.trim().is_empty() {
            return Err(GreetingError::EmptyName);
        }
        Ok(format!("{} {}", self.greeting, name))
    }
}
```
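A handler then needs to turn `GreetingError` into an HTTP response. The sketch below is a hedged, framework-free illustration (plain Rust, with the service repeated so it compiles alone; the `to_response` helper and status numbers are our assumptions, not a Navius API). In a real Axum application you would implement `IntoResponse` for the error type instead:

```rust
pub enum GreetingError {
    EmptyName,
}

pub struct HelloService {
    greeting: String,
}

impl HelloService {
    pub fn greet(&self, name: &str) -> Result<String, GreetingError> {
        if name.trim().is_empty() {
            return Err(GreetingError::EmptyName);
        }
        Ok(format!("{} {}", self.greeting, name))
    }
}

/// Illustrative helper: map a service result to an (HTTP status, body) pair.
fn to_response(result: Result<String, GreetingError>) -> (u16, String) {
    match result {
        Ok(message) => (200, message),
        Err(GreetingError::EmptyName) => (400, "name must not be empty".to_string()),
    }
}

fn main() {
    let service = HelloService { greeting: "Hello,".to_string() };
    // A valid name succeeds; a blank name maps to a 400-style error.
    assert_eq!(to_response(service.greet("World")).0, 200);
    assert_eq!(to_response(service.greet("   ")).0, 400);
    println!("error mapping works");
}
```

Keeping the error-to-status mapping in one function means handlers never invent status codes ad hoc, and the mapping itself is trivially unit-testable.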
Troubleshooting
| Issue | Solution |
|---|---|
| Compilation Errors | Ensure you have the correct versions of dependencies |
| Server Won't Start | Check if port 3000 is already in use |
| 404 Not Found | Verify the URL path matches the route defined in `router.rs` |
| Empty Response | Check your handler logic and verify the service returns the expected data |
Next Steps
This simple example demonstrates the basic structure of a Navius application. From here, you can:
- Add more advanced routing patterns
- Implement proper error handling
- Explore database integration
- Add authentication
- Set up comprehensive testing
For more sophisticated examples, check out the Examples section, particularly:
- REST API Example for a more complete API
- Dependency Injection Example for advanced DI patterns
---
title: Navius Development Environment Setup
description: Comprehensive guide for setting up a complete Navius development environment
category: getting-started
tags:
  - development
  - setup
  - tools
  - ide
  - configuration
related:
  - installation.md
  - first-steps.md
  - hello-world.md
  - ../03_contributing/coding-standards.md
last_updated: March 27, 2025
version: 1.1
status: active
---
Navius Development Environment Setup
Overview
This guide walks you through setting up a comprehensive development environment for working with the Navius framework. It covers IDE configuration, tools, extensions, and best practices that will enhance your development experience and productivity.
Prerequisites
Before setting up your development environment, ensure you have:
- Completed the Installation Guide for basic Navius setup
- Basic familiarity with command-line tools and Git
- Admin/sudo rights on your development machine
- Rust toolchain installed (1.70.0 or later)
Quick Start
For experienced developers who want to get started quickly:
```bash
# Clone the repository if you haven't already
git clone https://github.com/your-organization/navius.git
cd navius

# Install recommended development tools
cargo install cargo-edit cargo-watch cargo-expand cargo-tarpaulin

# Set up VS Code with extensions (if using VS Code)
code --install-extension rust-lang.rust-analyzer
code --install-extension tamasfe.even-better-toml
code --install-extension serayuzgur.crates
code --install-extension vadimcn.vscode-lldb
```
Step-by-step Setup
1. IDE Installation and Configuration
We recommend using Visual Studio Code or JetBrains CLion for Navius development.
Visual Studio Code
1. Install Visual Studio Code
   - Download and install from code.visualstudio.com
2. Install Essential Extensions
   ```bash
   code --install-extension rust-lang.rust-analyzer
   code --install-extension tamasfe.even-better-toml
   code --install-extension serayuzgur.crates
   code --install-extension vadimcn.vscode-lldb
   ```
3. Configure VS Code Settings

   Create or update `.vscode/settings.json` in the project directory:
   ```json
   {
     "rust-analyzer.checkOnSave.command": "clippy",
     "rust-analyzer.checkOnSave.allTargets": true,
     "editor.formatOnSave": true,
     "rust-analyzer.cargo.allFeatures": true,
     "rust-analyzer.procMacro.enable": true,
     "[rust]": {
       "editor.defaultFormatter": "rust-lang.rust-analyzer"
     }
   }
   ```
4. Configure VS Code Launch Configuration

   Create or update `.vscode/launch.json` for debugging:
   ```json
   {
     "version": "0.2.0",
     "configurations": [
       {
         "type": "lldb",
         "request": "launch",
         "name": "Debug Navius",
         "cargo": {
           "args": ["build", "--bin", "navius"],
           "filter": { "name": "navius", "kind": "bin" }
         },
         "args": [],
         "cwd": "${workspaceFolder}",
         "env": { "RUST_LOG": "debug" }
       }
     ]
   }
   ```
JetBrains CLion
1. Install CLion
   - Download and install from jetbrains.com/clion
2. Install Rust Plugin
   - Go to Settings/Preferences → Plugins
   - Search for "Rust" and install the official plugin
   - Restart CLion
3. Configure Toolchain
   - Go to Settings/Preferences → Languages & Frameworks → Rust
   - Set the toolchain location to your Rust installation
   - Enable "Run external linter to analyze code on the fly"
4. Configure Run Configurations
   - Go to Run → Edit Configurations
   - Add a new Cargo configuration
   - Set the command to "run" and add any necessary environment variables
2. Git Configuration
1. Configure Git Identity
   ```bash
   git config --global user.name "Your Name"
   git config --global user.email "[email protected]"
   ```
2. Set Up Git Hooks
   ```bash
   cd navius
   cp .git/hooks/pre-commit.sample .git/hooks/pre-commit
   chmod +x .git/hooks/pre-commit
   ```
3. Configure Git Hooks

   Edit `.git/hooks/pre-commit` to include:
   ```sh
   #!/bin/sh
   # Run clippy before commit
   cargo clippy -- -D warnings
   if [ $? -ne 0 ]; then
     echo "Clippy failed, commit aborted"
     exit 1
   fi

   # Run tests before commit
   cargo test
   if [ $? -ne 0 ]; then
     echo "Tests failed, commit aborted"
     exit 1
   fi
   ```
4. Configure Git Aliases (Optional)
   ```bash
   git config --global alias.st status
   git config --global alias.co checkout
   git config --global alias.br branch
   git config --global alias.cm "commit -m"
   ```
3. Command-line Tools
1. Install Cargo Extensions
   ```bash
   cargo install cargo-edit      # For dependency management
   cargo install cargo-watch     # For auto-reloading during development
   cargo install cargo-expand    # For macro debugging
   cargo install cargo-tarpaulin # For code coverage
   cargo install cargo-outdated  # For checking outdated dependencies
   cargo install cargo-bloat     # For analyzing binary size
   ```
2. Install Database Tools
   ```bash
   # For PostgreSQL
   pip install pgcli            # Better PostgreSQL CLI

   # For Redis (if using Redis)
   brew install redis-cli       # macOS with Homebrew
   # or
   sudo apt install redis-tools # Ubuntu
   ```
3. Install API Testing Tools
   ```bash
   # Install httpie for API testing
   pip install httpie

   # Or install Postman
   # Download from https://www.postman.com/downloads/
   ```
4. Install Documentation Tools
   ```bash
   # Install mdbook for documentation previewing
   cargo install mdbook

   # Install additional mdbook components
   cargo install mdbook-mermaid   # For diagrams
   cargo install mdbook-linkcheck # For validating links
   ```
4. Environment Configuration
1. Create Development Environment Files
   ```bash
   cp .env.example .env.development
   ```
   Edit `.env.development` with your local settings:
   ```bash
   # Environment selection
   RUN_ENV=development

   # Logging
   RUST_LOG=debug

   # Database configuration
   DATABASE_URL=postgres://postgres:postgres@localhost:5432/navius

   # Secrets (development only)
   JWT_SECRET=dev_secret_key_do_not_use_in_production

   # Other settings
   ENABLE_SWAGGER=true
   ```
2. Configure Shell Aliases

   Add to your `~/.bashrc` or `~/.zshrc`:
   ```bash
   # Navius development aliases
   alias ns="cd /path/to/navius && ./run_dev.sh"
   alias nt="cd /path/to/navius && cargo test"
   alias nc="cd /path/to/navius && cargo clippy"
   alias nw="cd /path/to/navius && cargo watch -x run"
   alias ndoc="cd /path/to/navius && cd docs && mdbook serve"
   ```
5. Docker Setup (Optional)
1. Install Docker and Docker Compose
   - Download and install from docker.com
2. Verify Installation
   ```bash
   docker --version
   docker-compose --version
   ```
3. Set Up Development Containers
   ```bash
   cd navius/test/resources/docker
   docker-compose -f docker-compose.dev.yml up -d
   ```
4. Configure Docker Integration with IDE
   - In VS Code, install the Docker extension
   - In CLion, configure Docker integration in settings
Verification
To verify your development environment:
1. Run the Linter
   ```bash
   cargo clippy
   ```
2. Run Tests
   ```bash
   cargo test
   ```
3. Start the Development Server
   ```bash
   ./run_dev.sh --watch
   ```
4. Access the Application

   Open a browser and navigate to http://localhost:3000/actuator/health
5. Check API Documentation

   Navigate to http://localhost:3000/docs to view the Swagger UI.
Troubleshooting
Common Issues
| Issue | Solution |
|---|---|
| Rust Analyzer Not Working | Ensure the rust-analyzer extension is properly installed and VS Code has been restarted |
| Build Errors | Run `cargo clean` followed by `cargo build` to rebuild from scratch |
| Git Hooks Not Running | Check permissions with `ls -la .git/hooks/` and ensure hooks are executable |
| Database Connection Errors | Ensure PostgreSQL is running and the connection string is correct |
| Missing Dependencies | Run `rustup update` and `cargo update` to update Rust and dependencies |
| Hot Reload Not Working | Check the `cargo-watch` installation and ensure watchexec is working |
IDE-Specific Issues
- VS Code: If IntelliSense is not working, try "Restart Rust Analyzer" from the command palette
- CLion: If cargo features aren't recognized, invalidate caches via File → Invalidate Caches and Restart
Environment-Specific Solutions
macOS
- If you encounter OpenSSL issues, install it via Homebrew: `brew install openssl`
- For PostgreSQL installation: `brew install postgresql`
Linux
- On Ubuntu, install build essentials: `sudo apt install build-essential`
- For debugging tools: `sudo apt install lldb`
Windows
- Use Windows Subsystem for Linux (WSL2) for the best experience
- For the MSVC toolchain, install the Visual Studio C++ Build Tools; if your tooling needs the Rust sources, add them with `rustup component add rust-src`
Development Workflow Tips
1. Use Feature Branches
   - Create a new branch for each feature: `git checkout -b feature/my-feature`
   - Keep branches focused on a single task
2. Run Tests Frequently
   - Use `cargo test` or the alias `nt` before committing
   - Consider setting up continuous integration
3. Format Code Automatically
   - Use `cargo fmt` or enable formatting on save in your IDE
   - Run `cargo clippy` to catch common mistakes
4. Review Documentation
   - Update docs when changing functionality
   - Preview documentation changes with mdbook
Next Steps
After setting up your development environment, we recommend exploring:
- IDE Setup Guide - Configure your editor for optimal Navius development
- Git Workflow Guide - Learn version control best practices for Navius
- Testing Guide - Understand how to test Navius applications
- Debugging Guide - Learn techniques for troubleshooting issues
For a quick start project, see our Hello World Tutorial.
Related Documents
- Installation Guide - Prerequisites for this guide
- First Steps - Next steps after setting up your environment
- Coding Standards - Guidelines for code contributions
- Development Workflow - Understanding the development process
---
title: Navius CLI Reference
description: Comprehensive reference for the Navius command-line interface
category: getting-started
tags:
  - cli
  - reference
  - commands
  - development
  - tooling
related:
  - installation.md
  - development-setup.md
  - first-steps.md
last_updated: March 27, 2025
version: 1.0
status: active
---
Navius CLI Reference
Overview
This document provides a comprehensive reference for the Navius command-line interface (CLI), including all available commands, options, environment variables, and exit codes.
Prerequisites
- Rust and Cargo installed
- Navius project cloned and dependencies installed (see Installation Guide)
Quick Start
Most commonly used commands:
```bash
# Run the application with default settings
cargo run

# Run in release mode
cargo run --release

# Run with specific features
cargo run --features "feature1,feature2"

# Run tests
cargo test
```
Basic Commands
Run Application
```bash
cargo run
```
Starts the application with default configuration.
Run with Specific Feature Flags
```bash
cargo run --features "feature1,feature2"
```
Runs the application with specific features enabled.
Development Mode
```bash
cargo run --features "dev"
```
Runs the application in development mode, which enables additional logging and development tooling.
Configuration Commands
Using Custom Configuration
```bash
cargo run -- --config-path=/path/to/config.yaml
```
Starts the application with a custom configuration file.
Environment Override
```bash
ENV_VAR=value cargo run
```
Runs the application with environment variable overrides.
Testing Commands
Run All Tests
```bash
cargo test
```
Runs all tests in the application.
Run Specific Tests
```bash
cargo test test_name
```
Runs tests matching the specified name.
Run Tests with Coverage
```bash
cargo tarpaulin --out Html
```
Runs tests and generates a coverage report.
Build Commands
Debug Build
```bash
cargo build
```
Builds the application in debug mode.
Release Build
```bash
cargo build --release
```
Builds the application in release mode, with optimizations enabled.
Build Documentation
```bash
cargo doc --no-deps --open
```
Builds and opens the API documentation.
Linting and Formatting
Check Code Style
```bash
cargo clippy
```
Checks the code for style issues and common mistakes.
Format Code
```bash
cargo fmt
```
Formats the code according to the Rust style guidelines.
Database Commands
Run Migrations
```bash
cargo run --bin migrate
```
Runs database migrations to set up or update the database schema.
Reset Database
```bash
cargo run --bin reset-db
```
Resets the database (warning: this will delete all data).
Advanced Commands
Generate Offline SQLx Data
```bash
cargo sqlx prepare
```
Generates SQLx data for offline compilation.
Analyze Binary Size
```bash
cargo bloat --release
```
Analyzes the binary size to identify large dependencies.
Key Concepts
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `RUST_LOG` | Controls log level | `info` |
| `CONFIG_PATH` | Path to configuration file | `config/default.yaml` |
| `DATABASE_URL` | Database connection string | Configured in YAML |
| `PORT` | Server port | `3000` |
| `HOST` | Server host | `127.0.0.1` |
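Application code typically reads these variables with a fallback to the documented default. A std-only sketch (the variable names match the table above; the `env_or` helper is illustrative, not part of the Navius API):

```rust
use std::env;

/// Read an environment variable, falling back to a default value
/// when it is unset or not valid UTF-8.
fn env_or(name: &str, default: &str) -> String {
    env::var(name).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // Defaults mirror the table: HOST=127.0.0.1, PORT=3000, RUST_LOG=info.
    let host = env_or("HOST", "127.0.0.1");
    let port = env_or("PORT", "3000");
    let log_level = env_or("RUST_LOG", "info");
    println!("binding {}:{} (log level: {})", host, port, log_level);
}
```

Environment variables win over the defaults, which matches the usual precedence of environment overrides on top of YAML configuration.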
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Configuration error |
| 3 | Database connection error |
| 4 | Permission error |
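A program honors this table by converting its top-level error into the matching code before exiting. A hedged sketch in plain Rust (the `AppError` variants are our assumption for illustration; only the code numbers come from the table above):

```rust
use std::process;

/// Illustrative top-level error type; the variants are assumptions,
/// only the exit-code numbers come from the table.
enum AppError {
    Config,
    Database,
    Permission,
    Other,
}

/// Map each error to its documented process exit code.
fn exit_code(err: &AppError) -> i32 {
    match err {
        AppError::Other => 1,      // General error
        AppError::Config => 2,     // Configuration error
        AppError::Database => 3,   // Database connection error
        AppError::Permission => 4, // Permission error
    }
}

fn run() -> Result<(), AppError> {
    // Real startup logic (load config, connect to the database) would go here.
    Ok(())
}

fn main() {
    match run() {
        Ok(()) => {} // falling off main exits with code 0: success
        Err(err) => process::exit(exit_code(&err)),
    }
}
```

Centralizing the mapping keeps shell scripts and CI checks that branch on `$?` in sync with a single source of truth.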
Script Files
Navius provides several convenience scripts in the project root:
| Script | Purpose |
|---|---|
| `run_dev.sh` | Runs the application in development mode with hot reloading |
| `run_prod.sh` | Runs the application in production mode |
| `test.sh` | Runs tests with common options |
| `reset_db.sh` | Resets the database and reruns migrations |
Troubleshooting
| Issue | Solution |
|---|---|
| Command not found | Make sure you're in the project root directory |
| Permission denied | Check file permissions or use `sudo` if appropriate |
| Dependency errors | Run `cargo update` to update dependencies |
| Test failures | Check error messages and review related code |
Next Steps
- Review the Installation Guide for setup instructions
- See Development Setup for creating a development environment
- Learn about basic concepts in First Steps
---
title: "Navius Examples"
description: "Usage examples for the Navius Framework"
category: examples
tags:
  - examples
  - usage
  - patterns
last_updated: March 27, 2025
version: 1.0
---
Navius Examples
This directory contains examples demonstrating how to use the Navius framework for building robust, maintainable applications with Rust.
Available Examples
| Example | Description |
|---|---|
| Basic Application Example | A minimalist Navius application demonstrating core concepts |
| Custom Service Example | How to create and register custom services |
| Database Service Example | Using the database service for data persistence |
| Cache Provider Example | Working with the cache provider for improved performance |
| Two-Tier Cache Example | Implementing multi-level caching strategies |
| Health Service Example | Implementing application health checks |
| Configuration Example | Working with configuration in Navius applications |
| Error Handling Example | Best practices for error handling |
| Dependency Injection Example | Using dependency injection in Navius |
| REST API Example | Building REST APIs with Navius |
| GraphQL Example | Building GraphQL APIs with Navius |
Running the Examples
Each example can be run from its own directory. To run an example:
- Clone the Navius repository
- Navigate to the example directory
- Follow the specific instructions in the example's README.md file
For most examples, you can run them with:
cargo run
Some examples may require additional setup, such as a database connection or environment variables.
Getting Help
If you have questions about these examples, please:
- Check the Navius documentation
- Open an issue on the Navius GitHub repository
- Join the Navius Discord community
---
title: "Spring Boot vs Navius Framework Comparison"
description: "Comprehensive comparison between Spring Boot (Java) and Navius (Rust) frameworks to help Java developers transition to Rust"
category: examples
tags:
  - comparison
  - spring-boot
  - java
  - rust
  - migration
  - web-framework
related:
  - 02_examples/rest-api-example.md
  - 02_examples/dependency-injection-example.md
  - 01_getting_started/first-steps.md
last_updated: March 27, 2025
version: 1.0
status: stable
---
Spring Boot vs Navius Developer Experience
This document illustrates the similarities between Spring Boot and Navius frameworks, showing how Java Spring Boot developers can easily transition to Rust using Navius.
Overview
Navius was designed with Spring Boot developers in mind, providing a familiar programming model while leveraging Rust's performance and safety benefits. This guide highlights the parallel patterns between the two frameworks.
Quick Navigation
- Application Bootstrap
- Module Organization
- Simple Health Endpoint
- REST Controller
- Service Layer
- Dependency Injection
- Configuration
- Database Access
- Testing
- Common Design Patterns
Application Bootstrap
Spring Boot (Java)
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
Navius (Rust)
fn main() {
NaviusApp::new()
.with_default_config()
.with_actuator()
.with_swagger()
.run();
}
Key Similarities
- Both frameworks provide a single entry point for application bootstrap
- Fluent API for configuration
- Convention over configuration approach
- Built-in support for common production-ready features
Module Organization
Spring Boot (Java)
Java Spring Boot follows a package-based organization where components are organized by feature or layer:
com.example.demo/
├── DemoApplication.java
├── config/
│   └── SecurityConfig.java
├── controller/
│   └── UserController.java
├── service/
│   └── UserService.java
├── repository/
│   └── UserRepository.java
└── model/
    └── User.java
Navius (Rust)
Navius uses a flat module structure with centralized declarations in lib.rs:
// In lib.rs
mod core {
pub mod router {
pub mod core_router;
pub mod core_app_router;
pub use core_router::*;
pub use core_app_router::*;
}
pub mod models { /* ... */ }
pub mod handlers { /* ... */ }
}
mod app {
pub mod api { /* ... */ }
pub mod services { /* ... */ }
}
// Directory structure
src/
├── lib.rs
├── main.rs
├── core/
│   ├── router/
│   │   ├── core_router.rs
│   │   └── core_app_router.rs
│   └── models/
│       └── core_response.rs
└── app/
    └── api/
        └── examples.rs
This approach eliminates the need for mod.rs files in each directory, reducing file clutter and making the module structure more immediately apparent in a single location.
Key Similarities
- Logical separation of concerns (controllers, services, repositories)
- Clear distinction between framework components and application code
- Support for modular architecture
- Ability to organize by feature or by layer
Simple Health Endpoint
Spring Boot (Java)
@RestController
public class SimpleHealthController {
@GetMapping("/health")
public ResponseEntity<Map<String, String>> health() {
Map<String, String> response = new HashMap<>();
response.put("status", "UP");
return ResponseEntity.ok(response);
}
}
Navius (Rust)
// In src/app/router.rs - User's custom router implementation
use navius::core::core_router::{Router, get};
use navius::core::core_response::IntoResponse;
use axum::Json;
use serde_json::json;
// Define your custom router configuration
pub fn configure_routes(router: &mut Router) {
router.route("/health", get(health_handler));
}
// Your custom health endpoint implementation
async fn health_handler() -> impl IntoResponse {
Json(json!({ "status": "UP" }))
}
// Register your routes in main.rs
fn main() {
NaviusApp::new()
.with_default_config()
.with_routes(configure_routes)
.run();
}
Extending the Health Endpoint in Navius
// In src/app/health.rs - User's custom health implementation
use navius::core::core_router::{Router, get};
use navius::core::core_response::IntoResponse;
use axum::Json;
use serde_json::json;
// Custom health implementation with more details
pub async fn custom_health_handler() -> impl IntoResponse {
// Custom checks you might want to add
let db_status = check_database().await;
Json(json!({
"status": if db_status { "UP" } else { "DOWN" },
"timestamp": chrono::Utc::now().to_rfc3339(),
"details": {
"database": db_status,
"version": env!("CARGO_PKG_VERSION")
}
}))
}
// Register in your router (src/app/router.rs)
pub fn configure_routes(router: &mut Router) {
router.route("/health", get(custom_health_handler));
}
Key Similarities
- Similar endpoint declaration syntax
- JSON response generation pattern
- Endpoint registration mechanism
- Support for custom health information
- Built-in health check system
REST Controller
Spring Boot (Java)
@RestController
@RequestMapping("/api/users")
public class UserController {
@Autowired
private UserService userService;
@GetMapping
public List<User> getAllUsers() {
return userService.findAll();
}
@GetMapping("/{id}")
public ResponseEntity<User> getUserById(@PathVariable UUID id) {
return userService.findById(id)
.map(ResponseEntity::ok)
.orElse(ResponseEntity.notFound().build());
}
@PostMapping
public ResponseEntity<User> createUser(@RequestBody @Valid UserRequest request) {
User user = userService.create(request);
return ResponseEntity
.created(URI.create("/api/users/" + user.getId()))
.body(user);
}
}
Navius (Rust)
// In src/app/controllers/user_controller.rs
use navius::core::core_macros::{api_controller, api_routes, request_mapping, get, post};
use navius::core::core_error::AppError;
use navius::app::services::UserService;
use axum::{Json, extract::Path};
use uuid::Uuid;
use std::sync::Arc;
#[api_controller]
#[request_mapping("/api/users")]
pub struct UserController {
service: Arc<dyn UserService>,
}
#[api_routes]
impl UserController {
#[get("")]
async fn get_all_users(&self) -> Result<Json<Vec<User>>, AppError> {
let users = self.service.find_all().await?;
Ok(Json(users))
}
#[get("/:id")]
async fn get_user_by_id(&self, Path(id): Path<Uuid>) -> Result<Json<User>, AppError> {
match self.service.find_by_id(id).await? {
Some(user) => Ok(Json(user)),
None => Err(AppError::not_found("User not found"))
}
}
#[post("")]
async fn create_user(&self, Json(request): Json<UserRequest>) -> Result<(StatusCode, Json<User>), AppError> {
// Validation happens via a derive macro on UserRequest
let user = self.service.create(request).await?;
Ok((StatusCode::CREATED, Json(user)))
}
}
Key Similarities
- Controllers organized by resource
- Base path mapping for resource collections
- Similar HTTP method annotations
- Path parameter extraction
- Request body validation
- Structured error handling
- Status code management
Service Layer
Spring Boot (Java)
@Service
public class UserServiceImpl implements UserService {
@Autowired
private UserRepository userRepository;
@Override
public List<User> findAll() {
return userRepository.findAll();
}
@Override
public Optional<User> findById(UUID id) {
return userRepository.findById(id);
}
@Override
public User create(UserRequest request) {
User user = new User();
user.setName(request.getName());
user.setEmail(request.getEmail());
return userRepository.save(user);
}
}
Navius (Rust)
// In src/app/services/user_service.rs
use async_trait::async_trait;
use std::sync::Arc;
use uuid::Uuid;
use navius::core::core_error::AppError;
use crate::app::repositories::UserRepository;
use crate::app::models::{User, UserRequest};
#[async_trait]
pub trait UserService: Send + Sync {
async fn find_all(&self) -> Result<Vec<User>, AppError>;
async fn find_by_id(&self, id: Uuid) -> Result<Option<User>, AppError>;
async fn create(&self, request: UserRequest) -> Result<User, AppError>;
}
pub struct UserServiceImpl {
repository: Arc<dyn UserRepository>,
}
impl UserServiceImpl {
pub fn new(repository: Arc<dyn UserRepository>) -> Self {
Self { repository }
}
}
#[async_trait]
impl UserService for UserServiceImpl {
async fn find_all(&self) -> Result<Vec<User>, AppError> {
self.repository.find_all().await
}
async fn find_by_id(&self, id: Uuid) -> Result<Option<User>, AppError> {
self.repository.find_by_id(id).await
}
async fn create(&self, request: UserRequest) -> Result<User, AppError> {
let user = User {
id: Uuid::new_v4(),
name: request.name,
email: request.email,
created_at: chrono::Utc::now(),
};
self.repository.save(user).await
}
}
Key Similarities
- Services implement interfaces/traits for testability
- Clear method contracts
- Separation of business logic from persistence
- Similar CRUD operation patterns
- Error propagation patterns
Dependency Injection
Spring Boot (Java)
// Component definition
@Service
public class EmailService {
public void sendEmail(String to, String subject, String body) {
// Implementation
}
}
// Component usage
@Service
public class NotificationService {
private final EmailService emailService;
@Autowired
public NotificationService(EmailService emailService) {
this.emailService = emailService;
}
public void notifyUser(User user, String message) {
emailService.sendEmail(user.getEmail(), "Notification", message);
}
}
// Multiple implementations with qualifier
@Service("simpleEmail")
public class SimpleEmailService implements EmailService { /*...*/ }
@Service("advancedEmail")
public class AdvancedEmailService implements EmailService { /*...*/ }
// Usage with qualifier
@Service
public class NotificationService {
private final EmailService emailService;
@Autowired
public NotificationService(@Qualifier("advancedEmail") EmailService emailService) {
this.emailService = emailService;
}
// ...
}
Navius (Rust)
// Service registry setup
use navius::core::core_registry::{ServiceRegistry, ServiceProvider};
use std::sync::Arc;
// Setting up the dependencies
fn configure_services(registry: &mut ServiceRegistry) {
// Register the repository
let repository = Arc::new(UserRepositoryImpl::new());
registry.register::<dyn UserRepository>(repository);
// Register the service, with dependency on repository
let repository = registry.resolve::<dyn UserRepository>().unwrap();
let service = Arc::new(UserServiceImpl::new(repository));
registry.register::<dyn UserService>(service);
}
// In main.rs
fn main() {
NaviusApp::new()
.with_default_config()
.with_services(configure_services)
.run();
}
// Usage in controllers
#[api_controller]
#[request_mapping("/api/users")]
pub struct UserController {
// Automatically injected by Navius
service: Arc<dyn UserService>,
}
// Multiple implementations with named registrations
fn configure_services(registry: &mut ServiceRegistry) {
// Register different implementations
let simple_email = Arc::new(SimpleEmailService::new());
let advanced_email = Arc::new(AdvancedEmailService::new());
registry.register_named::<dyn EmailService>("simple", simple_email);
registry.register_named::<dyn EmailService>("advanced", advanced_email);
// Resolve named service
let email_service = registry.resolve_named::<dyn EmailService>("advanced").unwrap();
let notification_service = Arc::new(NotificationServiceImpl::new(email_service));
registry.register::<dyn NotificationService>(notification_service);
}
Key Similarities
- Container-managed dependency injection
- Constructor-based injection
- Support for interface/trait-based programming
- Multiple implementations with qualifiers/named registration
- Singleton lifecycle by default
- Ability to resolve dependencies from the container
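The resolve-by-type behavior listed above can be illustrated with a minimal, stdlib-only sketch of a type-keyed container. This is not the Navius `ServiceRegistry` (which also supports trait-object and named registrations); `MiniRegistry` and `EmailService` here are purely illustrative:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::sync::Arc;

// Illustrative type-keyed registry; not the Navius ServiceRegistry.
// Services are stored as Arc<dyn Any> and resolved by concrete type.
#[derive(Default)]
struct MiniRegistry {
    services: HashMap<TypeId, Arc<dyn Any + Send + Sync>>,
}

impl MiniRegistry {
    fn register<T: Any + Send + Sync>(&mut self, svc: Arc<T>) {
        self.services.insert(TypeId::of::<T>(), svc);
    }

    fn resolve<T: Any + Send + Sync>(&self) -> Option<Arc<T>> {
        self.services
            .get(&TypeId::of::<T>())
            .and_then(|s| Arc::clone(s).downcast::<T>().ok())
    }
}

struct EmailService;

impl EmailService {
    fn send(&self, to: &str) -> String {
        format!("sent to {to}")
    }
}

fn main() {
    let mut registry = MiniRegistry::default();
    registry.register(Arc::new(EmailService));

    // Resolution is by type, mirroring registry.resolve::<T>() above.
    let email = registry.resolve::<EmailService>().expect("registered");
    println!("{}", email.send("[email protected]"));
}
```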
Configuration
Spring Boot (Java)
// application.properties or application.yml
server.port=8080
app.name=MyApplication
app.feature.enabled=true
database.url=jdbc:postgresql://localhost:5432/mydb
database.username=user
database.password=pass
// Configuration class
@Configuration
@ConfigurationProperties(prefix = "database")
public class DatabaseConfig {
private String url;
private String username;
private String password;
// Getters and setters
@Bean
public DataSource dataSource() {
HikariConfig config = new HikariConfig();
config.setJdbcUrl(url);
config.setUsername(username);
config.setPassword(password);
return new HikariDataSource(config);
}
}
// Using environment-specific configurations
// application-dev.properties, application-prod.properties
// Activated with: spring.profiles.active=dev
Navius (Rust)
// config/default.yaml
server:
port: 8080
app:
name: MyApplication
feature:
enabled: true
database:
url: postgres://localhost:5432/mydb
username: user
password: pass
// Configuration struct
#[derive(Debug, Deserialize)]
pub struct AppConfig {
pub server: ServerConfig,
pub app: ApplicationConfig,
pub database: DatabaseConfig,
}
#[derive(Debug, Deserialize)]
pub struct DatabaseConfig {
pub url: String,
pub username: String,
pub password: String,
}
// Loading configuration
impl AppConfig {
pub fn load() -> Result<Self, ConfigError> {
let builder = Config::builder()
.add_source(File::with_name("config/default"))
.add_source(File::with_name(&format!("config/{}", env::var("NAVIUS_ENV").unwrap_or_else(|_| "development".into()))).required(false))
.add_source(Environment::with_prefix("NAVIUS").separator("__"))
.build()?;
builder.try_deserialize()
}
}
// Using configuration
fn main() {
let config = AppConfig::load().expect("Failed to load configuration");
NaviusApp::new()
.with_config(config)
.run();
}
Key Similarities
- External configuration files
- Environment-specific configurations
- Type-safe configuration objects
- Hierarchical configuration structure
- Environment variable overrides
- Default values for missing properties
Database Access
Spring Boot (Java)
// Entity definition
@Entity
@Table(name = "users")
public class User {
@Id
@GeneratedValue
private UUID id;
@Column(nullable = false)
private String name;
@Column(nullable = false, unique = true)
private String email;
@Column(name = "created_at")
private LocalDateTime createdAt;
// Getters and setters
}
// Repository definition
@Repository
public interface UserRepository extends JpaRepository<User, UUID> {
List<User> findByNameContaining(String namePart);
@Query("SELECT u FROM User u WHERE u.email = :email")
Optional<User> findByEmail(@Param("email") String email);
}
// Usage
@Service
public class UserService {
private final UserRepository repository;
@Autowired
public UserService(UserRepository repository) {
this.repository = repository;
}
public List<User> findByName(String name) {
return repository.findByNameContaining(name);
}
}
Navius (Rust)
// Entity definition
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct User {
pub id: Uuid,
pub name: String,
pub email: String,
#[sqlx(rename = "created_at")]
pub created_at: DateTime<Utc>,
}
// Repository definition
#[async_trait]
pub trait UserRepository: Send + Sync {
async fn find_all(&self) -> Result<Vec<User>, AppError>;
async fn find_by_id(&self, id: Uuid) -> Result<Option<User>, AppError>;
async fn find_by_name(&self, name: &str) -> Result<Vec<User>, AppError>;
async fn find_by_email(&self, email: &str) -> Result<Option<User>, AppError>;
async fn save(&self, user: User) -> Result<User, AppError>;
}
// Implementation
pub struct UserRepositoryImpl {
pool: Arc<PgPool>,
}
impl UserRepositoryImpl {
pub fn new(pool: Arc<PgPool>) -> Self {
Self { pool }
}
}
#[async_trait]
impl UserRepository for UserRepositoryImpl {
async fn find_all(&self) -> Result<Vec<User>, AppError> {
let users = sqlx::query_as::<_, User>("SELECT * FROM users")
.fetch_all(&*self.pool)
.await
.map_err(|e| AppError::database_error(e))?;
Ok(users)
}
async fn find_by_name(&self, name: &str) -> Result<Vec<User>, AppError> {
let users = sqlx::query_as::<_, User>("SELECT * FROM users WHERE name LIKE $1")
.bind(format!("%{}%", name))
.fetch_all(&*self.pool)
.await
.map_err(|e| AppError::database_error(e))?;
Ok(users)
}
async fn find_by_email(&self, email: &str) -> Result<Option<User>, AppError> {
let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE email = $1")
.bind(email)
.fetch_optional(&*self.pool)
.await
.map_err(|e| AppError::database_error(e))?;
Ok(user)
}
// Other methods...
}
Key Similarities
- Entity-based database modeling
- Repository pattern for data access
- Support for complex queries
- Connection pooling
- Parameter binding for SQL safety
- Type-safe result mapping
Testing
Spring Boot (Java)
// Service unit test
@RunWith(MockitoJUnitRunner.class)
public class UserServiceTest {
@Mock
private UserRepository userRepository;
@InjectMocks
private UserServiceImpl userService;
@Test
public void findById_shouldReturnUser_whenUserExists() {
// Arrange
UUID id = UUID.randomUUID();
User user = new User();
user.setId(id);
user.setName("Test User");
when(userRepository.findById(id)).thenReturn(Optional.of(user));
// Act
Optional<User> result = userService.findById(id);
// Assert
assertTrue(result.isPresent());
assertEquals("Test User", result.get().getName());
verify(userRepository).findById(id);
}
}
// Controller integration test
@RunWith(SpringRunner.class)
@WebMvcTest(UserController.class)
public class UserControllerTest {
@Autowired
private MockMvc mockMvc;
@MockBean
private UserService userService;
@Test
public void getUserById_shouldReturnUser_whenUserExists() throws Exception {
// Arrange
UUID id = UUID.randomUUID();
User user = new User();
user.setId(id);
user.setName("Test User");
when(userService.findById(id)).thenReturn(Optional.of(user));
// Act & Assert
mockMvc.perform(get("/api/users/{id}", id))
.andExpect(status().isOk())
.andExpect(jsonPath("$.id").value(id.toString()))
.andExpect(jsonPath("$.name").value("Test User"));
}
}
Navius (Rust)
// Service unit test
#[cfg(test)]
mod tests {
use super::*;
use mockall::predicate::*;
use mockall::mock;
mock! {
UserRepository {}
#[async_trait]
impl UserRepository for UserRepository {
async fn find_all(&self) -> Result<Vec<User>, AppError>;
async fn find_by_id(&self, id: Uuid) -> Result<Option<User>, AppError>;
async fn save(&self, user: User) -> Result<User, AppError>;
}
}
#[tokio::test]
async fn find_by_id_should_return_user_when_user_exists() {
// Arrange
let id = Uuid::new_v4();
let user = User {
id,
name: "Test User".to_string(),
email: "[email protected]".to_string(),
created_at: chrono::Utc::now(),
};
let mut repository = MockUserRepository::new();
repository.expect_find_by_id()
.with(eq(id))
.returning(move |_| Ok(Some(user.clone())));
let service = UserServiceImpl::new(Arc::new(repository));
// Act
let result = service.find_by_id(id).await;
// Assert
assert!(result.is_ok());
let user_opt = result.unwrap();
assert!(user_opt.is_some());
let found_user = user_opt.unwrap();
assert_eq!(found_user.name, "Test User");
}
}
// Controller integration test
#[cfg(test)]
mod tests {
use super::*;
use navius::core::core_test::TestApp;
use axum::http::StatusCode;
#[tokio::test]
async fn get_user_by_id_should_return_user_when_user_exists() {
// Arrange
let id = Uuid::new_v4();
let user = User {
id,
name: "Test User".to_string(),
email: "[email protected]".to_string(),
created_at: chrono::Utc::now(),
};
let app = TestApp::new()
.with_mock::<dyn UserService, _>(move |mut mock| {
mock.expect_find_by_id()
.with(eq(id))
.returning(move |_| Ok(Some(user.clone())));
mock
})
.build();
// Act
let response = app.get(&format!("/api/users/{}", id)).await;
// Assert
assert_eq!(response.status(), StatusCode::OK);
let body: User = response.json().await;
assert_eq!(body.id, id);
assert_eq!(body.name, "Test User");
}
}
Key Similarities
- Unit testing with mocks
- Integration testing with test clients
- Clear Arrange-Act-Assert pattern
- Declarative test case structure
- Mock expectations and verifications
- Support for testing async code
- JSON response validation
Common Design Patterns
Both Spring Boot and Navius encourage the use of similar design patterns, making the transition between frameworks more intuitive:
Factory Pattern
Spring Boot:
@Component
public class PaymentMethodFactory {
@Autowired
private List<PaymentProcessor> processors;
public PaymentProcessor getProcessor(String type) {
return processors.stream()
.filter(p -> p.supports(type))
.findFirst()
.orElseThrow(() -> new IllegalArgumentException("No processor for type: " + type));
}
}
Navius:
pub struct PaymentMethodFactory {
processors: HashMap<String, Arc<dyn PaymentProcessor>>,
}
impl PaymentMethodFactory {
pub fn new(registry: &ServiceRegistry) -> Self {
let processors = registry.resolve_all::<dyn PaymentProcessor>();
let mut map = HashMap::new();
for processor in processors {
map.insert(processor.get_type().to_string(), processor);
}
Self { processors: map }
}
pub fn get_processor(&self, type_: &str) -> Result<Arc<dyn PaymentProcessor>, AppError> {
self.processors.get(type_)
.cloned()
.ok_or_else(|| AppError::not_found(&format!("No processor for type: {}", type_)))
}
}
Observer Pattern
Spring Boot:
// With Spring Events
@Component
public class OrderService {
@Autowired
private ApplicationEventPublisher eventPublisher;
public Order createOrder(OrderRequest request) {
Order order = // create order
// Publish event for observers
eventPublisher.publishEvent(new OrderCreatedEvent(order));
return order;
}
}
@Component
public class EmailNotifier {
@EventListener
public void onOrderCreated(OrderCreatedEvent event) {
// Send email notification
}
}
Navius:
// With Navius Event Bus
pub struct OrderService {
event_bus: Arc<EventBus>,
}
impl OrderService {
pub fn new(event_bus: Arc<EventBus>) -> Self {
Self { event_bus }
}
pub async fn create_order(&self, request: OrderRequest) -> Result<Order, AppError> {
let order = // create order
// Publish event for observers
self.event_bus.publish(OrderCreatedEvent::new(order.clone())).await?;
Ok(order)
}
}
pub struct EmailNotifier {
// ...
}
#[async_trait]
impl EventHandler<OrderCreatedEvent> for EmailNotifier {
async fn handle(&self, event: &OrderCreatedEvent) -> Result<(), AppError> {
// Send email notification
Ok(())
}
}
// Register in service configuration
fn configure_services(registry: &mut ServiceRegistry) {
let event_bus = registry.resolve::<EventBus>().unwrap();
let notifier = Arc::new(EmailNotifier::new());
event_bus.subscribe::<OrderCreatedEvent, _>(notifier);
}
Migration Tips for Spring Boot Developers
When transitioning from Spring Boot to Navius, keep these key points in mind:
- **Understand Rust Ownership**: Rust's ownership model differs from Java's garbage collection. Use `Arc<T>` for shared ownership where needed.
- **Trait Objects Instead of Interfaces**: Use Rust traits (with `dyn` for dynamic dispatch) as you would Java interfaces.
- **Async/Await vs Blocking**: Navius uses async/await for concurrency, not threads like Spring Boot. Add `.await` to async function calls.
- **Error Handling with Result**: Replace exceptions with Rust's `Result` type for robust error handling.
- **Explicit Dependencies**: Navius requires explicit dependency registration, while Spring Boot has more automatic component scanning.
- **Immutable by Default**: Embrace Rust's immutability by default instead of the mutable objects common in Java.
- **Testing Approaches**: Both frameworks support mocking, but Rust tests use different libraries, like `mockall` instead of Mockito.
- **Configuration Loading**: Both frameworks support structured configuration, but with different approaches to deserialization.
- **Database Access**: Replace Spring Data repositories with explicit SQL in Navius using SQLx or Diesel.
- **Macros vs Annotations**: Use Rust macros like `#[api_controller]` similarly to Spring's `@RestController`.
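As an illustration of the `Result`-based error handling mentioned above, here is a small stdlib-only sketch; `parse_port` is a hypothetical helper, not a Navius function:

```rust
use std::num::ParseIntError;

// Where Spring Boot code throws and catches exceptions, idiomatic Rust
// returns Result and propagates errors with the `?` operator.
// parse_port is a hypothetical helper used only for illustration.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    let port: u16 = raw.trim().parse()?; // error propagates to the caller
    Ok(port)
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on port {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```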
Conclusion
Navius provides a familiar development experience for Spring Boot developers while leveraging Rust's performance, memory safety, and concurrency benefits. The similar architectural patterns and programming model allow for a smoother transition between the two frameworks.
By following the patterns demonstrated in this comparison guide, Spring Boot developers can quickly become productive with Navius and build high-performance, type-safe web applications with many of the same conveniences they're accustomed to.
For more detailed examples, refer to the REST API and dependency injection examples.
Troubleshooting Common Issues
"Cannot move out of borrowed content" errors
- Spring Boot approach: In Java, you can freely copy and pass objects.
- Navius solution: Use `clone()` for objects that implement `Clone`, or use references when possible.
Type parameter issues with trait objects
- Spring Boot approach: Java generics are erased at runtime.
- Navius solution: Use `dyn Trait` for trait objects and be mindful of type parameter constraints.
Async confusion
- Spring Boot approach: Blocking code is common.
- Navius solution: Use `.await` on all async function calls and ensure proper async function signatures.
Missing type information
- Spring Boot approach: Java's type inference is less strict.
- Navius solution: Add type annotations when the compiler can't infer types.
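The first issue above can be seen in a small stdlib-only example; `User`, `display_name`, and `name_len` are illustrative names:

```rust
// Illustrative only: User, display_name, and name_len are example names.
#[derive(Clone, Debug)]
struct User {
    name: String,
}

// Returning an owned String from a borrowed User requires a clone;
// writing just `user.name` here would try to move out of the borrow
// and fail to compile.
fn display_name(user: &User) -> String {
    user.name.clone()
}

// When ownership is not needed, just read through the reference.
fn name_len(user: &User) -> usize {
    user.name.len()
}

fn main() {
    let user = User { name: "Alice".to_string() };
    println!("{} ({} chars)", display_name(&user), name_len(&user));
    // `user` is still valid here because neither call took ownership.
    println!("{:?}", user);
}
```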
---
title: "Two-Tier Cache Implementation Example"
description: "Code examples demonstrating how to implement and use the two-tier caching system in Navius applications"
category: examples
tags:
  - examples
  - caching
  - redis
  - performance
  - two-tier
  - code
related:
  - ../guides/caching-strategies.md
  - ../reference/configuration/cache-config.md
  - ../reference/patterns/caching-patterns.md
last_updated: March 27, 2025
version: 1.0
---
Two-Tier Cache Implementation Example
This example demonstrates how to implement and use the two-tier caching system in a Navius application.
Basic Implementation
use navius::app::cache::{create_two_tier_cache, create_typed_two_tier_cache};
use navius::core::services::cache_service::CacheService;
use std::time::Duration;
use serde::{Serialize, Deserialize};
// Define a type to cache
#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
id: String,
name: String,
email: String,
}
// Create a function to set up the cache
async fn setup_user_cache(cache_service: &CacheService) -> Result<(), AppError> {
// Create a two-tier cache for user data
// Fast cache (memory) TTL: 60 seconds
// Slow cache (Redis) TTL: 1 hour
let user_cache = create_typed_two_tier_cache::<User>(
"users",
cache_service,
Some(Duration::from_secs(60)),
Some(Duration::from_secs(3600)),
).await?;
// Store a user in the cache
let user = User {
id: "123".to_string(),
name: "Alice".to_string(),
email: "[email protected]".to_string(),
};
user_cache.set("user:123", &user, None).await?;
// Later, retrieve the user from the cache
if let Some(cached_user) = user_cache.get("user:123").await? {
println!("Found user: {:?}", cached_user);
}
Ok(())
}
Fallback Behavior Demonstration
This example shows how the two-tier cache handles fallback behavior:
use navius::app::cache::create_two_tier_cache;
use navius::core::services::cache_service::CacheService;
use std::time::Duration;
async fn demonstrate_fallback(cache_service: &CacheService) -> Result<(), AppError> {
// Create a two-tier cache
let cache = create_two_tier_cache(
"demo-cache",
cache_service,
Some(Duration::from_secs(30)), // Fast cache TTL
Some(Duration::from_secs(300)), // Slow cache TTL
).await?;
// Set a value in both caches
cache.set("key1", "value1".as_bytes(), None).await?;
// This will fetch from the fast cache (in-memory)
let fast_result = cache.get("key1").await?;
println!("Fast cache result: {:?}", fast_result);
// Simulate clearing the fast cache
// In a real scenario, this might happen when the app restarts
cache.fast_cache.clear().await?;
// This will now fetch from the slow cache (Redis) and promote to fast cache
let fallback_result = cache.get("key1").await?;
println!("After fallback result: {:?}", fallback_result);
// This will now fetch from the fast cache again as the value was promoted
let promoted_result = cache.get("key1").await?;
println!("After promotion result: {:?}", promoted_result);
Ok(())
}
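The promotion behavior demonstrated above can also be sketched synchronously with plain `HashMap`s. The `TwoTierSketch` type below is purely illustrative and is not the Navius implementation:

```rust
use std::collections::HashMap;

// A synchronous, stdlib-only sketch of two-tier lookup with promotion.
// Plain HashMaps stand in for the in-memory and Redis tiers.
struct TwoTierSketch {
    fast: HashMap<String, String>,
    slow: HashMap<String, String>,
}

impl TwoTierSketch {
    fn get(&mut self, key: &str) -> Option<String> {
        // Fast-tier hit: return immediately.
        if let Some(v) = self.fast.get(key) {
            return Some(v.clone());
        }
        // Fast-tier miss: fall back to the slow tier and promote on hit.
        if let Some(v) = self.slow.get(key).cloned() {
            self.fast.insert(key.to_string(), v.clone());
            return Some(v);
        }
        None
    }
}

fn main() {
    let mut cache = TwoTierSketch {
        fast: HashMap::new(),
        slow: HashMap::new(),
    };
    // Simulate a restart: the value survives only in the slow tier.
    cache.slow.insert("key1".into(), "value1".into());

    let first = cache.get("key1"); // falls back to the slow tier, promotes
    let second = cache.get("key1"); // now served from the fast tier
    println!("{first:?} then {second:?}");
    assert!(cache.fast.contains_key("key1"));
}
```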
Development Environment Setup
For development environments without Redis, you can use the memory-only two-tier cache:
use navius::app::cache::create_memory_only_two_tier_cache;
use navius::core::services::cache_service::CacheService;
use std::time::Duration;
async fn setup_dev_cache(cache_service: &CacheService) -> Result<(), AppError> {
// Create a memory-only two-tier cache
// Small fast cache TTL: 10 seconds
// Larger slow cache TTL: 60 seconds
let dev_cache = create_memory_only_two_tier_cache(
"dev-cache",
cache_service,
Some(Duration::from_secs(10)),
Some(Duration::from_secs(60)),
).await?;
// Use it like a normal cache
dev_cache.set("dev-key", "dev-value".as_bytes(), None).await?;
Ok(())
}
Integration with Service Layer
Here's how to integrate the two-tier cache with a service layer:
use navius::app::cache::create_typed_two_tier_cache;
use navius::core::services::cache_service::CacheService;
use std::time::Duration;
use std::sync::Arc;
struct UserService {
cache: Arc<Box<dyn TypedCache<User>>>,
repository: Arc<dyn UserRepository>,
}
impl UserService {
async fn new(
cache_service: &CacheService,
repository: Arc<dyn UserRepository>,
) -> Result<Self, AppError> {
let cache = Arc::new(create_typed_two_tier_cache::<User>(
"users",
cache_service,
Some(Duration::from_secs(60)),
Some(Duration::from_secs(3600)),
).await?);
Ok(Self { cache, repository })
}
async fn get_user(&self, id: &str) -> Result<Option<User>, AppError> {
let cache_key = format!("user:{}", id);
// Try to get from cache first
if let Some(user) = self.cache.get(&cache_key).await? {
return Ok(Some(user));
}
// If not in cache, get from repository
if let Some(user) = self.repository.find_by_id(id).await? {
// Store in cache for next time
self.cache.set(&cache_key, &user, None).await?;
return Ok(Some(user));
}
Ok(None)
}
}
Complete Application Example
Here's a complete example showing how to set up and use the two-tier cache in an API endpoint:
use axum::{
routing::get,
Router,
extract::{State, Path},
response::Json,
};
use navius::app::cache::create_typed_two_tier_cache;
use navius::core::services::cache_service::CacheService;
use std::sync::Arc;
use std::time::Duration;
// Application state with cache service
struct AppState {
user_service: Arc<UserService>,
}
async fn setup_app() -> Router {
// Create cache service
let cache_service = CacheService::new().await.unwrap();
// Create user repository
let user_repository = Arc::new(UserRepositoryImpl::new());
// Create user service with caching
let user_service = Arc::new(
UserService::new(&cache_service, user_repository).await.unwrap()
);
// Create application state
let app_state = Arc::new(AppState { user_service });
// Create router with API endpoints
Router::new()
.route("/users/:id", get(get_user))
.with_state(app_state)
}
// API endpoint that uses the cached service
async fn get_user(
State(state): State<Arc<AppState>>,
Path(id): Path<String>,
) -> Json<Option<User>> {
let user = state.user_service.get_user(&id).await.unwrap();
Json(user)
}
Best Practices
- Correctly size your caches: Small, frequently accessed data works best
- Set appropriate TTLs: Fast cache should have shorter TTL than slow cache
- Handle errors gracefully: The two-tier cache handles most errors internally
- Monitor performance: Track hit/miss rates to fine-tune cache settings
- Use typed caches: They provide type safety and easier code maintenance
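The fast/slow TTL relationship above can be illustrated with a toy two-tier lookup. This is a simplified, synchronous sketch using only the standard library; the struct and method names are illustrative and are not the Navius `TypedCache` API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Toy two-tier cache: a fast tier with a short TTL backed by a slow tier
// with a longer TTL. Illustrative only, not the Navius implementation.
struct TwoTierCache {
    fast: HashMap<String, (String, Instant)>,
    slow: HashMap<String, (String, Instant)>,
    fast_ttl: Duration,
    slow_ttl: Duration,
}

impl TwoTierCache {
    fn new(fast_ttl: Duration, slow_ttl: Duration) -> Self {
        Self { fast: HashMap::new(), slow: HashMap::new(), fast_ttl, slow_ttl }
    }

    fn set(&mut self, key: &str, value: &str) {
        let now = Instant::now();
        // Write through to both tiers, each with its own TTL.
        self.fast.insert(key.to_string(), (value.to_string(), now + self.fast_ttl));
        self.slow.insert(key.to_string(), (value.to_string(), now + self.slow_ttl));
    }

    fn get(&mut self, key: &str) -> Option<String> {
        let now = Instant::now();
        // Check the fast tier first.
        if let Some((v, exp)) = self.fast.get(key) {
            if *exp > now {
                return Some(v.clone());
            }
        }
        // Fall back to the slow tier, promoting the entry on a hit.
        if let Some((v, exp)) = self.slow.get(key).cloned() {
            if exp > now {
                self.fast.insert(key.to_string(), (v.clone(), now + self.fast_ttl));
                return Some(v);
            }
        }
        None
    }
}
```

Because the fast tier expires first, a lookup after the short TTL elapses falls through to the slow tier and re-promotes the entry, which is the behavior the TTL guidance above is aiming for.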
Read More
For more details on caching strategies, see the Caching Strategies Guide.
title: "Server Customization System Examples" description: "Practical code examples for using the Server Customization System in Navius applications" category: examples tags:
- examples
- server-customization
- features
- optimization
- code related:
- ../guides/features/server-customization-cli.md
- ../reference/configuration/feature-config.md
- ../feature-system.md last_updated: March 27, 2025 version: 1.0
Server Customization System Examples
This document provides practical examples of how to use the Server Customization System in Navius applications.
Basic Feature Configuration
use navius::core::features::{FeatureRegistry, RuntimeFeatures};
use navius::app::AppBuilder;
fn main() {
// Create a new feature registry with default features
let mut registry = FeatureRegistry::new();
// Enable specific features
registry.enable("caching").unwrap();
registry.enable("metrics").unwrap();
registry.enable("security").unwrap();
// Disable features you don't need
registry.disable("tracing").unwrap();
registry.disable("websocket").unwrap();
// Resolve dependencies (this will enable any dependencies of the enabled features)
registry.resolve_dependencies().unwrap();
// Create runtime features from the registry
let runtime_features = RuntimeFeatures::from_registry(&registry);
// Build your application with the configured features
let app = AppBuilder::new()
.with_features(runtime_features)
.build();
// Start the server
app.start().unwrap();
}
Loading Features from Configuration
use navius::core::features::{FeatureRegistry, FeatureConfig};
use navius::core::config::ConfigLoader;
fn load_features_from_config() -> FeatureRegistry {
// Load configuration
let config = ConfigLoader::new()
.with_file("config/app.yaml")
.load()
.unwrap();
// Extract features configuration
let features_config = config.get_section("features").unwrap();
// Create feature registry from configuration
let mut registry = FeatureRegistry::from_config(&features_config);
// You can still make runtime adjustments
if cfg!(debug_assertions) {
// Enable development features in debug mode
registry.enable("debug_tools").unwrap();
}
// Finalize by resolving dependencies
registry.resolve_dependencies().unwrap();
registry
}
Feature-Conditional Code Execution
use navius::core::features::RuntimeFeatures;
// Using feature check in functions
fn initialize_metrics(features: &RuntimeFeatures) {
if !features.is_enabled("metrics") {
println!("Metrics disabled, skipping initialization");
return;
}
println!("Initializing metrics subsystem...");
// Check for advanced metrics
if features.is_enabled("advanced_metrics") {
println!("Initializing advanced metrics...");
// Initialize advanced metrics collectors
}
}
// Using the convenience macro
async fn setup_services(app_state: &AppState) {
// This code only runs if the "caching" feature is enabled
when_feature_enabled!(app_state, "caching", {
println!("Setting up cache service...");
let cache_service = CacheService::new().await.unwrap();
app_state.register_service("cache", cache_service);
});
// This code only runs if the "redis_caching" feature is enabled
when_feature_enabled!(app_state, "redis_caching", {
println!("Setting up Redis cache provider...");
let redis_provider = RedisProvider::new("redis://localhost:6379").await.unwrap();
app_state.register_cache_provider("redis", redis_provider);
});
}
Feature Dependency Example
use navius::core::features::{FeatureRegistry, FeatureInfo};
fn setup_feature_dependencies() -> FeatureRegistry {
let mut registry = FeatureRegistry::new();
// Define features with dependencies
let metrics = FeatureInfo::new("metrics")
.with_description("Basic metrics collection")
.with_default_enabled(true);
let advanced_metrics = FeatureInfo::new("advanced_metrics")
.with_description("Advanced metrics and custom reporters")
.with_dependency("metrics") // Depends on basic metrics
.with_default_enabled(false);
let redis_caching = FeatureInfo::new("redis_caching")
.with_description("Redis cache provider")
.with_dependency("caching") // Depends on basic caching
.with_default_enabled(true);
// Register features
registry.register(metrics).unwrap();
registry.register(advanced_metrics).unwrap();
registry.register(redis_caching).unwrap();
// When we enable advanced_metrics, it will automatically enable metrics
registry.enable("advanced_metrics").unwrap();
// Resolve all dependencies
registry.resolve_dependencies().unwrap();
// We didn't explicitly enable "metrics", but it will be enabled
// as a dependency of "advanced_metrics"
assert!(registry.is_enabled("metrics"));
registry
}
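Under the hood, dependency resolution is a transitive closure over the dependency graph: enabling a feature pulls in everything it depends on, directly or indirectly. A minimal sketch of the idea (the real `FeatureRegistry` also rejects unknown features and detects cycles, which this toy version omits):

```rust
use std::collections::{HashMap, HashSet};

// Simplified transitive dependency resolution. Enabling a feature also
// enables all of its (transitive) dependencies.
fn resolve(deps: &HashMap<&str, Vec<&str>>, enabled: &[&str]) -> HashSet<String> {
    let mut resolved = HashSet::new();
    let mut stack: Vec<&str> = enabled.to_vec();
    while let Some(feature) = stack.pop() {
        // `insert` returns false for features we've already handled,
        // so each feature is visited at most once.
        if resolved.insert(feature.to_string()) {
            if let Some(ds) = deps.get(feature) {
                stack.extend(ds.iter().copied());
            }
        }
    }
    resolved
}
```

With `advanced_metrics -> metrics` registered, resolving `["advanced_metrics"]` yields both features enabled, matching the assertion in the example above.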
Using the Feature CLI
The Server Customization System includes a CLI tool for managing features. Here's how to use it:
# List all available features
features_cli list
# Enable a specific feature
features_cli enable caching
# Disable a feature
features_cli disable tracing
# Show current feature status
features_cli status
# Create a custom server build with selected features
features_cli build --output=my-custom-server.bin
# Save current feature configuration to a file
features_cli save my-features.json
# Load features from a configuration file
features_cli load my-features.json
Conditional Compilation with Cargo Features
You can also use Cargo's feature flags for compile-time feature selection:
# Cargo.toml
[features]
default = ["metrics", "caching", "security"]
metrics = []
advanced_metrics = ["metrics"]
tracing = []
caching = []
redis_caching = ["caching"]
security = []
Then in your code:
// This code only compiles if the "metrics" feature is enabled
#[cfg(feature = "metrics")]
pub mod metrics {
pub fn initialize() {
println!("Initializing metrics...");
}
// This code only compiles if both "metrics" and "advanced_metrics" features are enabled
#[cfg(feature = "advanced_metrics")]
pub fn initialize_advanced() {
println!("Initializing advanced metrics...");
}
}
// This function only exists if the "caching" feature is enabled
#[cfg(feature = "caching")]
pub fn setup_cache() {
println!("Setting up cache...");
// This code only compiles if both "caching" and "redis_caching" features are enabled
#[cfg(feature = "redis_caching")]
{
println!("Setting up Redis cache provider...");
}
}
Feature Configuration File Example
# features.yaml
enabled:
- core
- api
- rest
- security
- caching
- redis_caching
- metrics
disabled:
- tracing
- advanced_metrics
- websocket
- graphql
configuration:
caching:
memory_cache_enabled: true
memory_cache_size: 10000
redis_enabled: true
redis_url: "redis://localhost:6379"
security:
rate_limit_enabled: true
rate_limit_requests_per_minute: 100
Custom Feature Registration
use navius::core::features::{FeatureRegistry, FeatureInfo, FeatureCategory};
fn register_custom_features() -> FeatureRegistry {
let mut registry = FeatureRegistry::new();
// Define a custom feature category
let api_category = FeatureCategory::new("api")
.with_description("API related features");
// Register the category
registry.register_category(api_category);
// Create custom features
let custom_api = FeatureInfo::new("custom_api")
.with_description("Custom API endpoints")
.with_category("api")
.with_default_enabled(false);
let custom_auth = FeatureInfo::new("custom_auth")
.with_description("Custom authentication provider")
.with_category("auth")
.with_dependency("auth")
.with_default_enabled(false);
// Register custom features
registry.register(custom_api).unwrap();
registry.register(custom_auth).unwrap();
// Enable custom features
registry.enable("custom_api").unwrap();
// Resolve dependencies
registry.resolve_dependencies().unwrap();
registry
}
Feature Status Display
The feature system includes utilities for displaying feature status:
use navius::core::features::{FeatureRegistry, FeatureStatusPrinter};
fn display_feature_status(registry: &FeatureRegistry) {
let printer = FeatureStatusPrinter::new(registry);
// Print a summary of enabled/disabled features
printer.print_summary();
// Print detailed information about all features
printer.print_detailed();
// Print information about a specific feature
printer.print_feature("caching");
// Print dependency tree
printer.print_dependency_tree();
}
Feature Documentation Generation
use navius::core::features::{FeatureRegistry, FeatureDocGenerator};
use std::fs::File;
fn generate_feature_documentation(registry: &FeatureRegistry) {
let doc_generator = FeatureDocGenerator::new(registry);
// Generate documentation for all features
let docs = doc_generator.generate_all();
// Write to a markdown file
let mut file = File::create("features.md").unwrap();
doc_generator.write_markdown(&mut file, &docs).unwrap();
// Generate configuration examples
let examples = doc_generator.generate_configuration_examples();
// Write to a YAML file
let mut example_file = File::create("feature-examples.yaml").unwrap();
doc_generator.write_yaml_examples(&mut example_file, &examples).unwrap();
}
Feature Visualization
The Server Customization System includes tools for visualizing feature dependencies:
use navius::core::features::{FeatureRegistry, FeatureVisualizer};
use std::fs::File;
use std::io::Write;
fn generate_feature_visualization(registry: &FeatureRegistry) {
let visualizer = FeatureVisualizer::new(registry);
// Generate a DOT graph of feature dependencies
let dot_graph = visualizer.generate_dot_graph();
// Write to a DOT file
let mut dot_file = File::create("features.dot").unwrap();
dot_file.write_all(dot_graph.as_bytes()).unwrap();
// Generate a dependency tree in ASCII
let ascii_tree = visualizer.generate_ascii_tree();
println!("{}", ascii_tree);
// Generate an HTML visualization
let html = visualizer.generate_html();
let mut html_file = File::create("features.html").unwrap();
html_file.write_all(html.as_bytes()).unwrap();
}
title: "Repository Pattern Example" description: "" category: "Documentation" tags: [] last_updated: "March 28, 2025" version: "1.0"
Repository Pattern Example
This guide demonstrates how to use the generic repository pattern in Navius to manage domain entities.
Overview
The repository pattern provides a separation between the domain model layer and the data access layer. It allows you to:
- Work with domain objects instead of raw data
- Switch data sources without changing business logic
- Test business logic without a real data source
- Implement rich query methods beyond simple CRUD
This pattern is implemented in the Navius framework through these components:
- `Entity` trait - Defines the core interface for domain objects
- `Repository<E>` trait - Defines CRUD operations for a specific entity type
- `RepositoryProvider` trait - Creates repositories for different storage types
- `GenericRepository<E>` - Type-safe repository facade for easy usage
Basic Example
Here's a simple example of how to use the repository pattern with a User entity:
use uuid::Uuid;
use serde::{Serialize, Deserialize};
use async_trait::async_trait;
use crate::app::models::user_entity::{User, UserRole};
use crate::core::models::Entity;
use crate::core::services::error::ServiceError;
use crate::core::services::repository_service::{GenericRepository, RepositoryService};
async fn user_repository_example() -> Result<(), Box<dyn std::error::Error>> {
// Create the repository service
let mut repo_service = RepositoryService::new();
repo_service.init().await?;
// Create a repository for User entities
let user_repo = GenericRepository::<User>::with_service(&repo_service).await?;
// Create a new user
let user = User::new(
"johndoe".to_string(),
"[email protected]".to_string(),
"John Doe".to_string(),
).with_role(UserRole::Admin);
// Save the user to the repository
let saved_user = user_repo.save(&user).await?;
println!("User saved with ID: {}", saved_user.id);
// Find the user by ID
let found_user = user_repo.find_by_id(saved_user.id()).await?;
if let Some(found_user) = found_user {
println!("Found user: {}", found_user.display_name);
}
// Delete the user
let deleted = user_repo.delete(saved_user.id()).await?;
println!("User deleted: {}", deleted);
Ok(())
}
Creating Custom Entity Types
To create your own entity type, implement the `Entity` trait:
use serde::{Deserialize, Serialize};
use uuid::Uuid;
use crate::core::models::Entity;
use crate::core::services::error::ServiceError;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Product {
pub id: Uuid,
pub name: String,
pub price: f64,
pub sku: String,
pub in_stock: bool,
}
impl Entity for Product {
type Id = Uuid;
fn id(&self) -> &Self::Id {
&self.id
}
fn collection_name() -> String {
"products".to_string()
}
fn validate(&self) -> Result<(), ServiceError> {
if self.name.is_empty() {
return Err(ServiceError::validation("Product name cannot be empty"));
}
if self.price <= 0.0 {
return Err(ServiceError::validation("Product price must be positive"));
}
if self.sku.is_empty() {
return Err(ServiceError::validation("SKU cannot be empty"));
}
Ok(())
}
}
impl Product {
pub fn new(name: String, price: f64, sku: String) -> Self {
Self {
id: Uuid::new_v4(),
name,
price,
sku,
in_stock: true,
}
}
}
Using Different Repository Providers
The framework supports different storage providers:
use crate::core::models::RepositoryConfig;
use crate::core::services::repository_service::RepositoryService;
async fn configure_repository_providers() -> Result<(), Box<dyn std::error::Error>> {
// Create repository service
let mut repo_service = RepositoryService::new();
// Configure repository for users with memory storage
let user_config = RepositoryConfig {
provider: "memory".to_string(),
// Other configuration options...
..Default::default()
};
repo_service.register_config("users", user_config);
// Initialize the service
repo_service.init().await?;
// Now repositories will use the configured providers
let user_repo = repo_service.create_typed_repository::<User>().await?;
Ok(())
}
Creating Custom Repository Methods
For specialized query needs beyond basic CRUD, you can create custom repository implementations:
use uuid::Uuid;
use crate::app::models::user_entity::{User, UserRole};
use crate::core::models::{Entity, Repository};
use crate::core::services::error::ServiceError;
use std::marker::PhantomData;
// Example of a custom user repository with specialized methods
pub struct CustomUserRepository<R: Repository<User>> {
inner: R,
_marker: PhantomData<User>,
}
impl<R: Repository<User>> CustomUserRepository<R> {
pub fn new(repository: R) -> Self {
Self {
inner: repository,
_marker: PhantomData,
}
}
// Delegate standard operations to inner repository
pub async fn find_by_id(&self, id: &Uuid) -> Result<Option<User>, ServiceError> {
self.inner.find_by_id(id).await
}
// Add custom methods
pub async fn find_by_email(&self, email: &str) -> Result<Option<User>, ServiceError> {
// Get all users and filter by email
let all_users = self.inner.find_all().await?;
Ok(all_users.into_iter().find(|u| u.email == email))
}
pub async fn find_by_role(&self, role: UserRole) -> Result<Vec<User>, ServiceError> {
// Get all users and filter by role
let all_users = self.inner.find_all().await?;
Ok(all_users.into_iter().filter(|u| u.role == role).collect())
}
}
Testing With Mock Repositories
The repository pattern makes testing business logic easy:
use mockall::predicate::*;
use mockall::mock;
// Generate a mock repository
mock! {
pub UserRepository {}
#[async_trait]
impl Repository<User> for UserRepository {
async fn find_by_id(&self, id: &Uuid) -> Result<Option<User>, ServiceError>;
async fn find_all(&self) -> Result<Vec<User>, ServiceError>;
async fn save(&self, entity: &User) -> Result<User, ServiceError>;
async fn delete(&self, id: &Uuid) -> Result<bool, ServiceError>;
async fn count(&self) -> Result<usize, ServiceError>;
async fn exists(&self, id: &Uuid) -> Result<bool, ServiceError>;
}
}
#[tokio::test]
async fn test_user_service() {
// Create a mock repository
let mut mock_repo = MockUserRepository::new();
// Set expectations
let test_user = User::new(
"testuser".to_string(),
"[email protected]".to_string(),
"Test User".to_string()
);
let expected_user = test_user.clone();
mock_repo.expect_find_by_id()
.with(eq(test_user.id))
.returning(move |_| Ok(Some(expected_user.clone())));
// Create the service with the mock repository
let user_service = UserService::new(GenericRepository::new(Box::new(mock_repo)));
// Test service methods
let result = user_service.find_by_id(*test_user.id()).await.unwrap();
assert!(result.is_some());
assert_eq!(result.unwrap().username, "testuser");
}
Benefits of the Repository Pattern
- Abstraction: Domain logic doesn't need to know about data storage details
- Testability: Easy to test with mock repositories
- Flexibility: Switch storage implementations without changing business logic
- Consistency: Standard interface for all entity types
- Type Safety: Generic repositories provide type-safe operations
- Domain-Driven: Focus on domain objects rather than data structures
- Performance: Repositories can implement caching or optimizations
Best Practices
- Keep entity validation in the `validate()` method
- Use the repository service for configuration and creation
- Use specialized repository implementations for complex queries
- Always use transactions for operations that modify multiple entities
- Consider using a facade for related repositories when dealing with aggregates
- Add proper error handling in repository implementations
- Use the generic repository for simple cases, custom repositories for complex ones
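The facade suggestion above can be sketched as a single entry point that coordinates two related stores for one aggregate, so callers cannot persist an order without its line items. Everything here is hypothetical: it uses plain in-memory maps rather than the Navius `Repository<E>` trait, and the types are invented for illustration:

```rust
use std::collections::HashMap;

// Hypothetical aggregate types for illustration.
#[derive(Clone, Debug, PartialEq)]
struct Order { id: u32, total: f64 }
#[derive(Clone, Debug, PartialEq)]
struct OrderLine { order_id: u32, sku: String }

// Facade over two related "repositories" (in-memory maps here).
struct OrderFacade {
    orders: HashMap<u32, Order>,
    lines: HashMap<u32, Vec<OrderLine>>,
}

impl OrderFacade {
    fn new() -> Self {
        Self { orders: HashMap::new(), lines: HashMap::new() }
    }

    // Save the whole aggregate through one entry point, validating
    // cross-entity invariants before anything is written.
    fn save_order(&mut self, order: Order, lines: Vec<OrderLine>) -> Result<(), String> {
        if lines.iter().any(|l| l.order_id != order.id) {
            return Err("line does not belong to this order".into());
        }
        self.lines.insert(order.id, lines);
        self.orders.insert(order.id, order);
        Ok(())
    }

    fn find_order(&self, id: u32) -> Option<(&Order, &[OrderLine])> {
        let order = self.orders.get(&id)?;
        let lines = self.lines.get(&id).map(|v| v.as_slice()).unwrap_or(&[]);
        Some((order, lines))
    }
}
```

In a real application the two maps would be `Repository<Order>` and `Repository<OrderLine>` instances, and `save_order` would run inside a transaction, per the best practices above.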
title: "Logging Service Example" description: "" category: "Documentation" tags: [] last_updated: "March 28, 2025" version: "1.0"
Logging Service Generalization
This document explains how to use the generic logging interface in the Navius application. The logging service has been redesigned to use a provider-based approach with pluggable implementations.
Core Concepts
LoggingOperations
The `LoggingOperations` trait defines the core interface for all loggers:
#[async_trait]
pub trait LoggingOperations: Send + Sync + 'static {
// Log a message at a specific level
fn log(&self, level: LogLevel, info: LogInfo) -> Result<(), LoggingError>;
// Convenience methods for different log levels
fn trace(&self, info: LogInfo) -> Result<(), LoggingError>;
fn debug(&self, info: LogInfo) -> Result<(), LoggingError>;
fn info(&self, info: LogInfo) -> Result<(), LoggingError>;
fn warn(&self, info: LogInfo) -> Result<(), LoggingError>;
fn error(&self, info: LogInfo) -> Result<(), LoggingError>;
// Log a structured record directly
fn log_structured(&self, record: StructuredLog) -> Result<(), LoggingError>;
// Set global context for all logs
fn with_global_context(&self, key: &str, value: &str) -> Result<(), LoggingError>;
// Control log levels
fn set_level(&self, level: LogLevel) -> Result<(), LoggingError>;
fn get_level(&self) -> LogLevel;
// Flush any buffered logs
async fn flush(&self) -> Result<(), LoggingError>;
// Create a child logger with additional context
fn child(&self, context: &str) -> Arc<dyn LoggingOperations>;
}
LoggingProvider
The `LoggingProvider` trait allows different logging implementations to be created:
#[async_trait]
pub trait LoggingProvider: Send + Sync + 'static {
// Create a new logger instance
async fn create_logger(&self, config: &LoggingConfig) -> Result<Arc<dyn LoggingOperations>, LoggingError>;
// Get provider name
fn name(&self) -> &'static str;
// Check if this provider supports the given configuration
fn supports(&self, config: &LoggingConfig) -> bool;
}
Using the Logging Service
Initialization
Initialize the logging service using the factory method:
use navius::core::logger::{init, LoggingConfig};
async fn setup_logging() -> Result<Arc<dyn LoggingOperations>, LoggingError> {
// Create a default configuration
let config = LoggingConfig::default();
// Initialize the logging system
let logger = init(&config).await?;
// Return the logger instance
Ok(logger)
}
Basic Logging
Log messages at different levels:
use navius::core::logger::{LogInfo, LogLevel};
// Log an info message
logger.info(LogInfo::new("Application started")).unwrap();
// Log a warning with context
logger.warn(
LogInfo::new("Resource limit approaching")
.with_context("memory-service")
.with_field("current_usage", "85%")
).unwrap();
// Log an error with request tracking
logger.error(
LogInfo::new("Authentication failed")
.with_request_id("req-123456")
.with_user_id("[email protected]")
).unwrap();
Structured Logging
Create structured logs for consistent formatting:
use navius::core::logger::{LogInfo, LogLevel, StructuredLog};
use std::collections::HashMap;
// Create structured log fields
let mut fields = HashMap::new();
fields.insert("operation".to_string(), "user-create".to_string());
fields.insert("duration_ms".to_string(), "42".to_string());
// Create log info
let log_info = LogInfo {
message: "Operation completed".to_string(),
context: Some("user-service".to_string()),
module: Some("api".to_string()),
request_id: Some("req-123".to_string()),
user_id: None,
timestamp: Some(chrono::Utc::now()),
additional_fields: fields,
};
// Convert to structured log
let structured_log = StructuredLog::from((LogLevel::Info, log_info));
// Log the structured record
logger.log_structured(structured_log).unwrap();
Advanced: Creating Child Loggers
Child loggers inherit settings but add additional context:
// Create a logger for a specific subsystem
let auth_logger = logger.child("auth-service");
// All logs from this logger will include the auth-service context
auth_logger.info(LogInfo::new("User login successful")).unwrap();
// Create nested child loggers
let oauth_logger = auth_logger.child("oauth");
oauth_logger.debug(LogInfo::new("Token validation")).unwrap();
Implementing a Custom Logger
To create a custom logger implementation:
- Create a struct that implements `LoggingOperations`
- Create a provider that implements `LoggingProvider`
- Register your provider with the registry
use navius::core::logger::{
LoggingOperations, LoggingProvider, LoggingProviderRegistry,
LogInfo, LogLevel, StructuredLog, LoggingConfig, LoggingError
};
use std::sync::Arc;
use async_trait::async_trait;
// Example custom logger implementation
struct CustomLogger;
impl LoggingOperations for CustomLogger {
fn log(&self, level: LogLevel, info: LogInfo) -> Result<(), LoggingError> {
// Implement your custom logging logic here
println!("[{}] {}", level, info.message);
Ok(())
}
// Implement other required methods...
}
// Custom provider implementation
struct CustomLoggerProvider;
#[async_trait]
impl LoggingProvider for CustomLoggerProvider {
async fn create_logger(&self, _config: &LoggingConfig) -> Result<Arc<dyn LoggingOperations>, LoggingError> {
Ok(Arc::new(CustomLogger))
}
fn name(&self) -> &'static str {
"custom"
}
fn supports(&self, config: &LoggingConfig) -> bool {
config.logger_type == "custom"
}
}
// Register your provider
async fn register_custom_provider() -> Result<Arc<dyn LoggingOperations>, LoggingError> {
let registry = Arc::new(LoggingProviderRegistry::new());
registry.register_provider(Arc::new(CustomLoggerProvider))?;
let config = LoggingConfig {
logger_type: "custom".to_string(),
..Default::default()
};
registry.create_logger_from_config(&config).await
}
Built-in Logger Implementations
Tracing Logger
The default implementation is based on the `tracing` crate:
// Create a tracing-based logger
let config = LoggingConfig {
logger_type: "tracing".to_string(),
level: "debug".to_string(),
format: "json".to_string(),
..Default::default()
};
let logger = init(&config).await.unwrap();
Console Logger
A colorized console logger is also provided:
// Create a console logger with colors
let config = LoggingConfig {
logger_type: "console".to_string(),
level: "info".to_string(),
colorize: true,
..Default::default()
};
let logger = init(&config).await.unwrap();
Configuring Logging
The `LoggingConfig` struct controls logger behavior:
let config = LoggingConfig {
// Logger implementation to use
logger_type: "tracing".to_string(),
// Minimum log level
level: "info".to_string(),
// Output format
format: "json".to_string(),
// Enable colorized output (for console loggers)
colorize: true,
// Include file path and line number
include_file_info: true,
// Global fields to add to all logs
global_fields: {
let mut fields = HashMap::new();
fields.insert("app_name".to_string(), "navius".to_string());
fields.insert("environment".to_string(), "development".to_string());
fields
},
..Default::default()
};
Best Practices
- Use structured logging: Prefer structured logs over raw strings for better searchability
- Include context: Always add relevant context to logs
- Use child loggers: Create child loggers for subsystems to maintain context
- Set appropriate log levels: Use debug/trace for development and info/warn for production
- Add request IDs: Include request IDs in logs for distributed request tracing
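The level guidance above relies on the conventional ordering trace < debug < info < warn < error: a message is emitted only when it is at or above the configured minimum. A minimal sketch of that filtering rule, using a toy `Level` enum rather than the Navius `LogLevel` type:

```rust
// Deriving Ord on a fieldless enum orders variants by declaration order,
// giving the conventional trace < debug < info < warn < error.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Level { Trace, Debug, Info, Warn, Error }

// A message passes the filter only at or above the configured minimum.
fn should_log(configured: Level, message: Level) -> bool {
    message >= configured
}
```

So a logger configured at `info` emits warnings and errors but drops debug output, which is why production typically runs at `info`/`warn` while development runs at `debug`/`trace`.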
title: "Database Service Example" description: "Examples of using the generic database service interfaces and providers" category: examples tags:
- database
- service
- generalization
- providers related:
- examples/repository-pattern-example.md
- roadmaps/25-generic-service-implementations.md
- reference/patterns/repository-pattern.md last_updated: March 27, 2025 version: 1.0
Database Service Example
This example demonstrates how to use the generic database service implementation, including defining providers, configuring the service, and performing database operations.
Overview
The Database Service implementation follows a provider-based architecture that enables:
- Abstracting database operations from specific implementations
- Supporting multiple database types through providers
- Configuration-based selection of database providers
- Easy testing with in-memory database implementations
Core Components
The database service consists of several key components:
- DatabaseOperations Trait: Defines core database operations
- DatabaseProvider Trait: Defines interface for creating database instances
- DatabaseProviderRegistry: Manages and creates database instances
- DatabaseConfig: Configures database connection settings
- InMemoryDatabase: Default in-memory implementation for testing
Basic Usage
Accessing the Database Service
The database service is accessible through the application's service registry:
use crate::core::services::ServiceRegistry;
use crate::core::services::database_service::DatabaseService;
// Get the service from service registry
let db_service = service_registry.get::<DatabaseService>();
// Use the database service
let result = db_service.create_database().await?;
Performing Basic Operations
Once you have a database instance, you can perform operations:
// Get a value
let user_json = db.get("users", "user-123").await?;
// Set a value
db.set("users", "user-456", &user_json_string).await?;
// Delete a value
let deleted = db.delete("users", "user-789").await?;
// Query with a filter
let active_users = db.query("users", "status='active'").await?;
Implementing a Custom Provider
You can implement your own database provider by implementing the `DatabaseProvider` trait:
use crate::core::services::database_service::{DatabaseOperations, DatabaseProvider};
use crate::core::services::error::ServiceError;
use async_trait::async_trait;
pub struct MyCustomDatabaseProvider;
#[async_trait]
impl DatabaseProvider for MyCustomDatabaseProvider {
type Database = MyCustomDatabase;
async fn create_database(&self, config: DatabaseConfig) -> Result<Self::Database, ServiceError> {
// Create and return your database implementation
Ok(MyCustomDatabase::new(config))
}
fn supports(&self, config: &DatabaseConfig) -> bool {
config.provider_type == "custom"
}
}
pub struct MyCustomDatabase {
// Your database implementation details
}
#[async_trait]
impl DatabaseOperations for MyCustomDatabase {
async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError> {
// Implement get operation
todo!()
}
// Implement other required operations...
}
Registering a Provider
Register your custom provider with the database service:
use crate::core::services::database_service::DatabaseProviderRegistry;
// Create a registry
let mut registry = DatabaseProviderRegistry::new();
// Register your provider
registry.register("custom", MyCustomDatabaseProvider);
// Create the database service with the registry
let db_service = DatabaseService::new(registry);
Using the In-Memory Database
The in-memory database provider is useful for testing:
use crate::core::services::memory_database::{InMemoryDatabaseProvider, InMemoryDatabase};
#[tokio::test]
async fn test_database_operations() {
// Create a provider and configuration
let provider = InMemoryDatabaseProvider::new();
let config = DatabaseConfig::default().with_provider("memory");
// Create a database instance
let db = provider.create_database(config).await.unwrap();
// Set a test value
db.set("test", "key1", "value1").await.unwrap();
// Get the value back
let value = db.get("test", "key1").await.unwrap();
assert_eq!(value, Some("value1".to_string()));
}
Configuration
Configure the database service in your application configuration:
# In config/default.yaml
database:
provider: memory # Could be postgres, mongodb, etc.
connection_string: ""
max_connections: 10
connection_timeout_ms: 5000
retry_attempts: 3
enable_logging: true
Loading the configuration:
use crate::core::config::AppConfig;
use crate::core::services::database_service::DatabaseConfig;
// Load from application config
let app_config = AppConfig::load()?;
let db_config = DatabaseConfig::from_app_config(&app_config);
// Or create it programmatically
let db_config = DatabaseConfig::default()
.with_provider("postgres")
.with_connection_string("postgres://user:pass@localhost/dbname")
.with_max_connections(20);
Complete Example
Here's a complete example showing how to set up and use the database service:
use crate::core::services::database_service::{
DatabaseService, DatabaseConfig, DatabaseProviderRegistry
};
use crate::core::services::memory_database::register_memory_database_provider;
async fn setup_database_service() -> Result<DatabaseService, ServiceError> {
// Create a provider registry
let mut registry = DatabaseProviderRegistry::new();
// Register the built-in memory provider
register_memory_database_provider(&mut registry);
// Create configuration
let config = DatabaseConfig::default()
.with_provider("memory")
.with_max_connections(5);
// Create service
let service = DatabaseService::new(registry)
.with_default_config(config);
// Initialize the service
service.init().await?;
Ok(service)
}
async fn example_usage(service: &DatabaseService) -> Result<(), ServiceError> {
// Create a database instance
let db = service.create_database().await?;
// Store user data
let user_data = r#"{"id":"user-123","name":"Alice","role":"admin"}"#;
db.set("users", "user-123", user_data).await?;
// Retrieve user data
if let Some(data) = db.get("users", "user-123").await? {
println!("User data: {}", data);
}
// Query users by role
let admins = db.query("users", "role='admin'").await?;
println!("Found {} admin users", admins.len());
Ok(())
}
Best Practices
- Provider Selection: Choose the appropriate provider based on your requirements
- Error Handling: Always handle database errors properly
- Connection Management: Reuse database connections where possible
- Testing: Use the in-memory database for testing
- Configuration: Externalize database configuration
- Transactions: Use transactions for multi-step operations
- Security: Always sanitize input to prevent injection attacks
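The `retry_attempts` setting in the configuration above implies retry logic around transient failures such as dropped connections. A generic sketch of such a helper; the closure-based shape is an assumption for illustration, not the Navius database API:

```rust
// Retry a fallible operation up to `attempts` times, returning the last
// error if every attempt fails. Panics if `attempts` is zero.
fn with_retries<T, E>(attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("attempts must be > 0"))
}
```

A production version would also back off between attempts and distinguish transient errors (timeouts) from permanent ones (authentication failures), which should not be retried.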
Related Documentation
title: "Health Service Example" description: "Examples of using the generic health service with custom indicators and providers" category: examples tags:
- health
- service
- monitoring
- indicators
- health-check related:
- roadmaps/25-generic-service-implementations.md
- reference/patterns/health-check-pattern.md
- reference/api/health-api.md last_updated: March 27, 2025 version: 1.0
Health Service Example
This example demonstrates how to use the generic health service implementation, including defining custom health indicators, registering providers, and building health dashboards.
Overview
The Health Service implementation follows a provider-based architecture that enables:
- Dynamic health indicators that can be registered at runtime
- Pluggable health providers for different subsystems
- Automatic discovery of health indicators
- Detailed health reporting with component status history
- Customizable health check aggregation
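As a standalone illustration of the aggregation idea, an overall status can be computed as the worst individual status, with critical components able to take the whole system down. The types below are illustrative only (they are not the Navius `DependencyStatus` API, and the `Degraded` level is added purely for the example):

```rust
// Standalone sketch: worst-status health aggregation with critical components.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Status {
    Up,
    Degraded,
    Down,
}

struct Component {
    name: &'static str,
    status: Status,
    critical: bool,
}

// Any critical component that is Down fails the whole system;
// otherwise the worst individual status is reported.
fn aggregate(components: &[Component]) -> Status {
    let mut worst = Status::Up;
    for c in components {
        if c.critical && c.status == Status::Down {
            return Status::Down;
        }
        if c.status > worst {
            worst = c.status;
        }
    }
    worst
}
```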
Core Components
The health service consists of several key components:
- HealthIndicator Trait: Defines interface for individual component health checks
- HealthProvider Trait: Defines interface for providers that create health indicators
- HealthDiscoveryService: Discovers and registers health indicators dynamically
- HealthService: Manages health checks and aggregates results
- HealthDashboard: Tracks health history and provides detailed reporting
Basic Usage
Accessing the Health Service
The health service is accessible through the application's service registry:
use crate::core::services::ServiceRegistry;
use crate::core::services::health::HealthService;
// Get the service from service registry
let health_service = service_registry.get::<HealthService>();
// Get the health status
let health_status = health_service.check_health().await?;
println!("System health: {}", health_status.status);
Implementing a Custom Health Indicator
Create a custom health indicator by implementing the `HealthIndicator` trait:
use crate::core::services::health::{HealthIndicator, DependencyStatus};
use std::sync::Arc;
use crate::core::router::AppState;
pub struct DatabaseHealthIndicator {
db_connection_string: String,
}
impl DatabaseHealthIndicator {
pub fn new(connection_string: &str) -> Self {
Self {
db_connection_string: connection_string.to_string(),
}
}
}
impl HealthIndicator for DatabaseHealthIndicator {
fn name(&self) -> String {
"database".to_string()
}
fn check_health(&self, _state: &Arc<AppState>) -> DependencyStatus {
// Check database connection
if let Ok(_) = check_database_connection(&self.db_connection_string) {
DependencyStatus::up()
} else {
DependencyStatus::down()
.with_detail("reason", "Could not connect to database")
.with_detail("connection", &self.db_connection_string)
}
}
// Optional: provide metadata about this indicator
fn metadata(&self) -> std::collections::HashMap<String, String> {
let mut metadata = std::collections::HashMap::new();
metadata.insert("type".to_string(), "database".to_string());
metadata.insert("version".to_string(), "1.0".to_string());
metadata
}
// Optional: set indicator priority (lower runs first)
fn order(&self) -> i32 {
10
}
// Optional: mark as critical (failure means system is down)
fn is_critical(&self) -> bool {
true
}
}
// Helper function to check database connection
fn check_database_connection(connection_string: &str) -> Result<(), Box<dyn std::error::Error>> {
// Actual implementation would connect to database
Ok(())
}
Creating a Health Provider
Create a provider that generates health indicators:
use crate::core::services::health::{HealthIndicator, HealthProvider};
use crate::core::config::AppConfig;
pub struct InfrastructureHealthProvider;
impl HealthProvider for InfrastructureHealthProvider {
fn create_indicators(&self) -> Vec<Box<dyn HealthIndicator>> {
let mut indicators = Vec::new();
// Add database health indicator
indicators.push(Box::new(DatabaseHealthIndicator::new(
"postgres://localhost/app"
)));
// Add disk space indicator
indicators.push(Box::new(DiskSpaceHealthIndicator::new("/data")));
// Add other infrastructure indicators
indicators.push(Box::new(MemoryHealthIndicator::new(90)));
indicators
}
fn is_enabled(&self, config: &AppConfig) -> bool {
// Check if this provider should be enabled
config.get_bool("health.infrastructure_checks_enabled").unwrap_or(true)
}
}
Registering Health Indicators and Providers
Register custom health indicators and providers:
use crate::core::services::health::{HealthService, HealthIndicator, HealthProvider};
// Setup health service with indicators
async fn setup_health_service() -> HealthService {
// Create a new health service
let mut health_service = HealthService::new();
// Register individual indicators
health_service.register_indicator(Box::new(DatabaseHealthIndicator::new(
"postgres://localhost/app"
)));
// Register a provider
health_service.register_provider(Box::new(InfrastructureHealthProvider));
// Initialize service
health_service.init().await.unwrap();
health_service
}
Using the Health Discovery Service
The Health Discovery Service automatically finds and registers health indicators:
use crate::core::services::health_discovery::HealthDiscoveryService;
use crate::core::services::health::HealthService;
async fn setup_with_discovery() -> HealthService {
// Create services
let mut health_service = HealthService::new();
let discovery_service = HealthDiscoveryService::new();
// Discover and register health indicators
let indicators = discovery_service.discover_indicators().await;
for indicator in indicators {
health_service.register_indicator(indicator);
}
// Initialize service
health_service.init().await.unwrap();
health_service
}
Health Dashboard
The Health Dashboard provides detailed health history:
use crate::core::services::health_dashboard::HealthDashboard;
use crate::core::services::health::HealthService;
use std::sync::Arc;
async fn setup_dashboard(health_service: Arc<HealthService>) -> HealthDashboard {
// Create a dashboard with history tracking
let mut dashboard = HealthDashboard::new()
.with_history_size(100) // Keep last 100 status checks
.with_health_service(health_service);
// Start background monitoring
dashboard.start_monitoring(std::time::Duration::from_secs(60)).await;
dashboard
}
Complete Example
Here's a complete example showing how to set up and use the health service:
use crate::core::services::health::{
HealthService, HealthIndicator, DependencyStatus
};
use crate::core::services::health_dashboard::HealthDashboard;
use crate::core::router::AppState;
use std::sync::Arc;
use std::collections::HashMap;
// Define a custom health indicator
struct ApiHealthIndicator {
api_url: String,
}
impl ApiHealthIndicator {
fn new(url: &str) -> Self {
Self { api_url: url.to_string() }
}
}
impl HealthIndicator for ApiHealthIndicator {
fn name(&self) -> String {
"external-api".to_string()
}
fn check_health(&self, _state: &Arc<AppState>) -> DependencyStatus {
// In a real implementation, check API availability
if self.api_url.starts_with("https") {
DependencyStatus::up()
} else {
DependencyStatus::down()
.with_detail("error", "Insecure URL")
.with_detail("url", &self.api_url)
}
}
fn metadata(&self) -> HashMap<String, String> {
let mut metadata = HashMap::new();
metadata.insert("type".to_string(), "external-api".to_string());
metadata
}
}
async fn setup_health_system() {
// Create a health service
let mut health_service = HealthService::new();
// Register health indicators
health_service.register_indicator(Box::new(ApiHealthIndicator::new(
"https://api.example.com/status"
)));
// Initialize the service
health_service.init().await.unwrap();
let health_service = Arc::new(health_service);
// Create health dashboard
let dashboard = HealthDashboard::new()
.with_health_service(Arc::clone(&health_service))
.with_history_size(50);
// Check health
let health = health_service.check_health().await.unwrap();
println!("System health: {}", health.status);
// List components
for component in health.components {
println!("{}: {}", component.name, component.status);
for (key, value) in component.details {
println!(" {}: {}", key, value);
}
}
// Get dashboard history
let history = dashboard.get_component_history("external-api").await;
println!("API health history: {} entries", history.len());
// Clear dashboard history
dashboard.clear_history().await;
}
Health API Endpoints
The health service automatically exposes API endpoints:
- `/actuator/health` - Basic health check (UP/DOWN)
- `/actuator/health/detail` - Detailed health information
- `/actuator/dashboard` - Health dashboard with history
Example response:
{
"status": "UP",
"timestamp": "2025-03-26T12:34:56.789Z",
"components": [
{
"name": "database",
"status": "UP",
"details": {
"type": "postgres",
"version": "14.5"
}
},
{
"name": "external-api",
"status": "DOWN",
"details": {
"error": "Connection timeout",
"url": "https://api.example.com/status"
}
}
]
}
Best Practices
- Critical Components: Mark critical health indicators that should fail the entire system
- Dependency Order: Set the order of health checks to check dependencies first
- Metadata: Include useful metadata in health indicators
- Dashboard History: Configure appropriate history size based on monitoring needs
- Performance: Ensure health checks are lightweight and don't impact system performance
- Security: Don't expose sensitive information in health details
- Timeouts: Set appropriate timeouts for health checks
- Discovery: Use the discovery service to automatically find health indicators
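On the timeout point, a slow dependency should count as unhealthy rather than stall the whole check. A standalone, std-only sketch of the idea follows; an async codebase would more likely use something like `tokio::time::timeout`, and this helper is illustrative rather than part of the Navius API:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Standalone sketch: run a health check on a worker thread and treat it
// as failed if it does not answer before the deadline.
fn check_with_timeout<F>(check: F, timeout: Duration) -> bool
where
    F: FnOnce() -> bool + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore send errors: the receiver may have given up already.
        let _ = tx.send(check());
    });
    // A check that never answers in time is reported as unhealthy.
    rx.recv_timeout(timeout).unwrap_or(false)
}
```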
Related Documentation
title: "Cache Provider Example" description: "Examples of using the generic cache service interfaces and providers" category: examples tags:
- cache
- service
- generalization
- providers
- performance related:
- examples/two-tier-cache-example.md
- roadmaps/25-generic-service-implementations.md
- roadmaps/07-enhanced-caching.md last_updated: March 27, 2025 version: 1.0
Cache Provider Example
This example demonstrates how to use the generic cache service implementation, including working with different cache providers, configuring caches, and implementing custom providers.
Overview
The Cache Service implementation follows a provider-based architecture that enables:
- Abstracting cache operations from specific implementations
- Supporting multiple cache types through providers (in-memory, Redis, etc.)
- Configuration-based selection of cache providers
- Layered caching through the two-tier cache implementation
- Consistent interface for all cache operations
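The configuration-based selection above can be sketched in isolation: a registry asks each registered provider whether it supports a requested type and delegates to the first match. All names below are illustrative, not the Navius API:

```rust
// Standalone sketch: configuration-based provider selection.
trait Provider {
    fn name(&self) -> &'static str;
    fn supports(&self, provider_type: &str) -> bool;
}

struct MemoryProvider;
impl Provider for MemoryProvider {
    fn name(&self) -> &'static str { "memory" }
    fn supports(&self, t: &str) -> bool { t == "memory" }
}

struct RedisProvider;
impl Provider for RedisProvider {
    fn name(&self) -> &'static str { "redis" }
    fn supports(&self, t: &str) -> bool { t == "redis" }
}

struct Registry {
    providers: Vec<Box<dyn Provider>>,
}

impl Registry {
    // The first registered provider that claims the requested type wins.
    fn select(&self, provider_type: &str) -> Option<&dyn Provider> {
        self.providers
            .iter()
            .find(|p| p.supports(provider_type))
            .map(|p| p.as_ref())
    }
}
```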
Core Components
The cache service architecture consists of several key components:
- CacheOperations Trait: Defines core cache operations (get, set, delete)
- CacheProvider Trait: Defines interface for creating cache instances
- CacheProviderRegistry: Manages and creates cache instances
- CacheConfig: Configures cache settings
- MemoryCacheProvider: Default in-memory implementation
- RedisCacheProvider: Redis-based implementation
- TwoTierCache: Implementation that combines fast and slow caches
Basic Usage
Accessing the Cache Service
The cache service is accessible through the application's service registry:
use crate::core::services::ServiceRegistry;
use crate::core::services::cache_service::CacheService;
// Get the service from service registry
let cache_service = service_registry.get::<CacheService>();
// Create a typed cache for a specific resource
let user_cache = cache_service.create_cache::<UserDto>("users").await?;
Performing Basic Cache Operations
Once you have a cache instance, you can perform operations:
use std::time::Duration;
// Set a value with 5 minute TTL
user_cache.set("user-123", user_dto, Some(Duration::from_secs(300))).await?;
// Get a value
if let Some(user) = user_cache.get("user-123").await {
println!("Found user: {}", user.name);
}
// Delete a value
user_cache.delete("user-123").await?;
// Clear the entire cache
user_cache.clear().await?;
Implementing a Custom Cache Provider
You can implement your own cache provider by implementing the `CacheProvider` trait:
use crate::core::services::cache_provider::{CacheOperations, CacheProvider, CacheError};
use crate::core::services::cache_service::CacheConfig;
use async_trait::async_trait;
use std::time::Duration;
use std::marker::PhantomData;
pub struct CustomCacheProvider;
#[async_trait]
impl CacheProvider for CustomCacheProvider {
async fn create_cache<T: Send + Sync + Clone + 'static>(
&self,
config: CacheConfig
) -> Result<Box<dyn CacheOperations<T>>, CacheError> {
Ok(Box::new(CustomCache::<T>::new(config)))
}
fn supports(&self, config: &CacheConfig) -> bool {
config.provider_type == "custom"
}
fn name(&self) -> &str {
"custom"
}
}
pub struct CustomCache<T> {
config: CacheConfig,
_phantom: PhantomData<T>,
// Your cache implementation details here
}
impl<T> CustomCache<T> {
fn new(config: CacheConfig) -> Self {
Self {
config,
_phantom: PhantomData,
}
}
}
#[async_trait]
impl<T: Send + Sync + Clone + 'static> CacheOperations<T> for CustomCache<T> {
async fn get(&self, key: &str) -> Option<T> {
// Implement get operation
None
}
async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError> {
// Implement set operation
Ok(())
}
async fn delete(&self, key: &str) -> Result<bool, CacheError> {
// Implement delete operation
Ok(false)
}
async fn clear(&self) -> Result<(), CacheError> {
// Implement clear operation
Ok(())
}
fn stats(&self) -> crate::core::services::cache_provider::CacheStats {
// Return cache statistics
crate::core::services::cache_provider::CacheStats {
hits: 0,
misses: 0,
size: 0,
max_size: self.config.max_size,
}
}
}
Registering a Provider
Register your custom provider with the cache service:
use crate::core::services::cache_provider::CacheProviderRegistry;
use crate::core::services::cache_service::CacheService;
use crate::core::services::memory_cache::MemoryCacheProvider;
use crate::core::services::redis_cache::RedisCacheProvider;
// Setup cache service with custom provider
async fn setup_cache_service() -> CacheService {
// Create a registry
let mut registry = CacheProviderRegistry::new();
// Register built-in providers
registry.register(Box::new(MemoryCacheProvider::new()));
registry.register(Box::new(RedisCacheProvider::new("redis://localhost:6379")));
// Register custom provider
registry.register(Box::new(CustomCacheProvider));
// Create service with registry
let cache_service = CacheService::new(registry);
// Initialize the service
cache_service.init().await.unwrap();
cache_service
}
Using the In-Memory Cache Provider
The in-memory cache provider is useful for local caching and testing:
use crate::core::services::memory_cache::MemoryCacheProvider;
use crate::core::services::cache_service::CacheConfig;
use std::time::Duration;
#[tokio::test]
async fn test_memory_cache() {
// Create a provider and configuration
let provider = MemoryCacheProvider::new();
let config = CacheConfig::default()
.with_name("test-cache")
.with_ttl(Duration::from_secs(60))
.with_max_size(1000);
// Create a cache instance
let cache = provider.create_cache::<String>(config).await.unwrap();
// Set a test value
cache.set("greeting", "Hello, world!".to_string(), None).await.unwrap();
// Get the value back
let value = cache.get("greeting").await;
assert_eq!(value, Some("Hello, world!".to_string()));
}
Using the Redis Cache Provider
The Redis cache provider is used for distributed caching:
use crate::core::services::redis_cache::RedisCacheProvider;
use crate::core::services::cache_service::CacheConfig;
async fn setup_redis_cache() {
// Create a provider and configuration
let provider = RedisCacheProvider::new("redis://localhost:6379");
let config = CacheConfig::default()
.with_provider("redis")
.with_name("user-cache");
// Create a cache instance
let cache = provider.create_cache::<UserDto>(config).await.unwrap();
// Use the cache
// ...
}
Two-Tier Cache Implementation
The two-tier cache combines a fast in-memory cache with a slower persistent cache:
use crate::core::services::cache_service::{TwoTierCache, CacheConfig};
use crate::core::services::memory_cache::MemoryCacheProvider;
use crate::core::services::redis_cache::RedisCacheProvider;
use std::time::Duration;
async fn setup_two_tier_cache() {
// Create providers
let memory_provider = MemoryCacheProvider::new();
let redis_provider = RedisCacheProvider::new("redis://localhost:6379");
// Create memory cache config (shorter TTL)
let memory_config = CacheConfig::default()
.with_provider("memory")
.with_ttl(Duration::from_secs(60))
.with_max_size(1000);
// Create Redis cache config (longer TTL)
let redis_config = CacheConfig::default()
.with_provider("redis")
.with_ttl(Duration::from_secs(3600));
// Create individual caches
let fast_cache = memory_provider.create_cache::<UserDto>(memory_config).await.unwrap();
let slow_cache = redis_provider.create_cache::<UserDto>(redis_config).await.unwrap();
// Create two-tier cache
let two_tier_cache = TwoTierCache::new(fast_cache, slow_cache);
// Use the cache - automatically manages both tiers
two_tier_cache.get("user-123").await;
}
Configuration
Configure the cache service in your application configuration:
# In config/default.yaml
cache:
default_provider: memory
providers:
memory:
enabled: true
max_size: 10000
ttl_seconds: 300
redis:
enabled: true
connection_string: redis://localhost:6379
ttl_seconds: 3600
resources:
users:
provider: memory
ttl_seconds: 60
max_size: 1000
products:
provider: redis
ttl_seconds: 1800
Loading the configuration:
use crate::core::config::AppConfig;
use crate::core::services::cache_service::CacheConfig;
use std::time::Duration;
// Load from application config
let app_config = AppConfig::load()?;
let cache_config = CacheConfig::from_app_config(&app_config, "users");
// Or create it programmatically
let cache_config = CacheConfig::default()
.with_provider("memory")
.with_name("users")
.with_ttl(Duration::from_secs(60))
.with_max_size(1000);
Complete Example
Here's a complete example showing how to set up and use the cache service:
use crate::core::services::cache_service::{
CacheService, CacheConfig, CacheOperations
};
use crate::core::services::cache_provider::{CacheProviderRegistry, CacheError};
use crate::core::services::memory_cache::register_memory_cache_provider;
use crate::core::services::redis_cache::register_redis_cache_provider;
use std::time::Duration;
// Example user DTO
#[derive(Clone)]
struct UserDto {
id: String,
name: String,
email: String,
}
async fn setup_cache_service() -> Result<CacheService, CacheError> {
// Create a provider registry
let mut registry = CacheProviderRegistry::new();
// Register providers
register_memory_cache_provider(&mut registry);
register_redis_cache_provider(&mut registry, "redis://localhost:6379");
// Create service
let service = CacheService::new(registry);
// Initialize the service
service.init().await?;
Ok(service)
}
async fn cache_example(service: &CacheService) -> Result<(), CacheError> {
// Create a typed cache for users
let user_cache = service.create_cache::<UserDto>("users").await?;
// Create a user
let user = UserDto {
id: "user-123".to_string(),
name: "Alice".to_string(),
email: "[email protected]".to_string(),
};
// Cache the user with 5 minute TTL
user_cache.set(&user.id, user.clone(), Some(Duration::from_secs(300))).await?;
// Get the user from cache
if let Some(cached_user) = user_cache.get(&user.id).await {
println!("Found user: {}", cached_user.name);
}
// Get cache statistics
let stats = user_cache.stats();
println!("Cache stats - Hits: {}, Misses: {}, Size: {}",
stats.hits, stats.misses, stats.size);
Ok(())
}
Integration with Two-Tier Cache
The generic cache providers can be used with the existing two-tier cache system:
use crate::core::services::cache_service::{TwoTierCache, TwoTierCacheConfig};
async fn two_tier_example(service: &CacheService) -> Result<(), CacheError> {
// Configure two-tier cache
let config = TwoTierCacheConfig::new()
.with_fast_provider("memory")
.with_slow_provider("redis")
.with_fast_ttl(Duration::from_secs(60))
.with_slow_ttl(Duration::from_secs(3600))
.with_promotion_enabled(true);
// Create a two-tier cache
let users_cache = service.create_two_tier_cache::<UserDto>("users", config).await?;
// Use it like a regular cache - automatically manages both tiers
let user = UserDto {
id: "user-456".to_string(),
name: "Bob".to_string(),
email: "[email protected]".to_string(),
};
// Set in both tiers
users_cache.set(&user.id, user.clone(), None).await?;
// Get first tries memory, then Redis if not found
if let Some(cached_user) = users_cache.get(&user.id).await {
println!("Found user: {}", cached_user.name);
}
Ok(())
}
Best Practices
- Provider Selection: Choose the appropriate provider based on your requirements:
  - Memory cache for fast local caching
  - Redis cache for distributed caching
  - Two-tier cache for a balance of performance and durability
- TTL Management: Set appropriate time-to-live values:
  - Shorter TTLs for frequently changing data
  - Longer TTLs for relatively static data
  - Consider using different TTLs for different cache tiers
- Cache Invalidation: Implement proper invalidation strategies:
  - Delete cache entries when the source data changes
  - Use version- or timestamp-based invalidation
  - Consider using cache groups for bulk invalidation
- Error Handling: Gracefully handle cache errors:
  - Don't let cache failures affect critical operations
  - Use fallbacks when the cache is unavailable
  - Log cache errors for monitoring
- Performance: Optimize cache usage for performance:
  - Cache expensive operations rather than simple lookups
  - Monitor cache hit rates and adjust strategies
  - Carefully select what to cache based on access patterns
- Security: Consider security implications:
  - Don't cache sensitive information unless necessary
  - Encrypt sensitive cached data if required
  - Set appropriate permissions on Redis instances
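As a standalone illustration of TTL management, a minimal map with per-entry deadlines shows how expired entries become cache misses. The types are illustrative only, not the Navius cache API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Standalone sketch: a minimal TTL map. Entries past their deadline
// read as cache misses and are evicted on access.
struct TtlCache<V> {
    entries: HashMap<String, (V, Instant)>,
    default_ttl: Duration,
}

impl<V: Clone> TtlCache<V> {
    fn new(default_ttl: Duration) -> Self {
        Self { entries: HashMap::new(), default_ttl }
    }

    fn set(&mut self, key: &str, value: V, ttl: Option<Duration>) {
        let deadline = Instant::now() + ttl.unwrap_or(self.default_ttl);
        self.entries.insert(key.to_string(), (value, deadline));
    }

    fn get(&mut self, key: &str) -> Option<V> {
        let expired = match self.entries.get(key) {
            Some((_, deadline)) => Instant::now() >= *deadline,
            None => return None,
        };
        if expired {
            // Drop the stale entry so it is never served again.
            self.entries.remove(key);
            return None;
        }
        self.entries.get(key).map(|(value, _)| value.clone())
    }
}
```

The per-entry `ttl` override mirrors the `Some(Duration::from_secs(300))` argument used in the examples above, while `None` falls back to the configured default.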
Related Documentation
title: "Contributing Guidelines" description: "Documentation about Contributing Guidelines" category: contributing tags:
- testing last_updated: March 27, 2025 version: 1.0
Contributing Guidelines
This directory contains guidelines and information for contributors to the Navius project.
Contents
- Contributing Guide - Main guide for project contributions
- Testing Implementation Template - Template for implementing tests
- Testing Prompt - Guidance for creating effective tests
Purpose
These documents provide the necessary information and guidelines for contributors to the Navius project. They include contribution workflows, coding standards, testing requirements, and other important information for developers working on the codebase.
title: "Contribution Guide" description: "Step-by-step guide for making contributions to the Navius project" category: "Contributing" tags: ["contributing", "development", "workflow", "guide", "pull request"] last_updated: "April 5, 2025" version: "1.0"
Contributing Guide
Overview
Thank you for your interest in contributing to the Navius project! This guide will walk you through the process of making contributions, from setting up your development environment to submitting your changes for review.
Prerequisites
Before you begin, ensure you have the following installed:
- Rust (latest stable version)
- Git
- A code editor (we recommend VS Code with the Rust extension)
- Docker (for running integration tests)
- PostgreSQL (for local development)
Getting Started
1. Fork the Repository
- Visit the Navius repository
- Click the "Fork" button in the top-right corner
- Clone your fork to your local machine:
git clone https://github.com/YOUR_USERNAME/navius.git
cd navius
2. Set Up the Development Environment
- Add the original repository as an upstream remote:
  git remote add upstream https://github.com/example/navius.git
- Install project dependencies:
  cargo build
- Set up the database:
  ./scripts/setup_db.sh
- Run the test suite to make sure everything is working:
  cargo test
Making Changes
1. Create a Feature Branch
Always create a new branch for your changes:
git checkout -b feature/your-feature-name
Use a descriptive branch name that reflects the changes you're making.
2. Development Workflow
- Make your changes in small, focused commits
- Follow our coding standards
- Include tests for your changes
- Update documentation as needed
Code Style
- Run `cargo fmt` before committing to ensure your code follows our style guidelines
- Use `cargo clippy` to catch common mistakes and improve your code
Testing
- Write unit tests for all new functions
- Create integration tests for API endpoints
- Run tests with `cargo test` before submitting your changes
3. Keep Your Branch Updated
Regularly sync your branch with the upstream repository:
git fetch upstream
git rebase upstream/main
Resolve any conflicts that arise during the rebase.
Submitting Your Contribution
1. Prepare Your Changes
Before submitting, make sure:
- All tests pass: `cargo test`
- Code passes linting: `cargo clippy`
- Your code is formatted: `cargo fmt`
- You've added/updated documentation
2. Create a Pull Request
- Push your changes to your fork:
  git push origin feature/your-feature-name
- Go to the Navius repository
- Click "Pull Requests" and then "New Pull Request"
- Select your fork and the feature branch containing your changes
- Provide a clear title and description for your pull request:
  - What changes does it introduce?
  - Why are these changes necessary?
  - How do these changes address the issue?
  - Any specific areas you'd like reviewers to focus on?
- Link any related issues by including "Fixes #issue-number" or "Relates to #issue-number" in the description
3. Code Review Process
- Wait for the CI/CD pipeline to complete
- Address any feedback from reviewers
- Make requested changes in new commits
- Push the changes to the same branch
- Mark resolved conversations as resolved
See our Code Review Process for more details.
Types of Contributions
Bug Fixes
- Check if the bug is already reported in the issues
- If not, create a new issue describing the bug
- Follow the steps above to submit a fix
Features
- For significant features, open an issue to discuss the proposal first
- Once consensus is reached, implement the feature
- Include comprehensive tests and documentation
Documentation
- For typos and minor corrections, you can edit directly on GitHub
- For significant changes, follow the standard contribution process
- Follow our Documentation Standards
Local Development Tips
Running the Application
cargo run
Visit `http://localhost:8080` to see the application running.
Debugging
- Use `println!()` or the `log` crate for debugging
- For more advanced debugging, VS Code's Rust debugger works well
Common Issues
- Database connection errors: Ensure PostgreSQL is running and credentials are correct
- Compilation errors: Run `cargo clean` followed by `cargo build`
- Test failures: Check for environment-specific issues like file permissions
Contributor Expectations
- Follow our Code of Conduct
- Be respectful and constructive in discussions
- Respond to feedback in a timely manner
- Help review other contributions when possible
Recognition
Contributors are recognized in several ways:
- Added to the contributors list in the README
- Mentioned in release notes for significant contributions
- Potential for direct commit access after consistent quality contributions
Related Resources
Thank you for contributing to Navius! Your efforts help make this project better for everyone.
title: "Code of Conduct" description: "Guidelines for participation in the Navius project community" category: "Contributing" tags: ["code of conduct", "community", "guidelines", "ethics", "respect"] last_updated: "April 5, 2025" version: "1.0"
Code of Conduct
Our Pledge
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Our Standards
Examples of behavior that contributes to a positive environment include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall community and project
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
Enforcement Responsibilities
Project maintainers are responsible for clarifying and enforcing our standards of acceptable behavior. They will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned with this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
Scope
This Code of Conduct applies within all community spaces, including the project repository, discussions, issue trackers, and all other platforms used by our community. It also applies when an individual is officially representing the community in public spaces.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the project maintainers responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly.
All project maintainers are obligated to respect the privacy and security of the reporter of any incident.
Enforcement Guidelines
Project maintainers will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
1. Correction
Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from project maintainers, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
2. Warning
Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
3. Temporary Ban
Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
4. Permanent Ban
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.
Reporting Guide
If you believe someone is violating the code of conduct, we ask that you report it by emailing [email protected]. All reports will be kept confidential. In your report please include:
- Your contact information
- Names (real, nicknames, or pseudonyms) of any individuals involved
- Your account of what occurred and if you believe the incident is ongoing
- Any additional information that may be helpful
After filing a report, a representative will contact you personally, review the incident, follow up with any additional questions, and make a decision as to how to respond. If the person who is harassing you is part of the response team, they will recuse themselves from handling your incident. If the complaint originates from a member of the response team, it will be handled by a different member of the response team. We will respect confidentiality requests for the purpose of protecting victims of abuse.
Feedback
This Code of Conduct is a living document and may be amended in the future as needed. Feedback is welcome and can be submitted through the standard issue or pull request workflow.
title: "Development Process" description: "A guide to the development workflow and processes for contributing to Navius" category: contributing tags:
- contributing
- development
- workflow
- process
- guidelines related:
- contributing/README.md
- contributing/testing-guidelines.md
- contributing/code-of-conduct.md
- architecture/project-structure.md last_updated: March 27, 2025 version: 1.0
Development Process
This document outlines the development process for contributing to the Navius framework, including workflows, best practices, and guidance for specific feature types.
Table of Contents
- Development Environment Setup
- Development Workflow
- Branching Strategy
- Code Review Process
- Testing Requirements
- Documentation Requirements
- Working with Complex Features
- Release Process
Development Environment Setup
1. Clone the Repository

```bash
git clone https://gitlab.com/ecoleman2/navius.git
cd navius
```

2. Install Dependencies

```bash
cargo build
```

3. Set Up Development Tools
   - Configure IDE (VS Code recommended)
   - Install extensions (Rust Analyzer, etc.)
   - Set up linting (rustfmt, clippy)

4. Start Local Development Environment

```bash
# Start required services
docker-compose -f .devtools/docker-compose.yml up -d

# Run the application
cargo run
```
Development Workflow
1. Issue Assignment
   - Select an issue from the issue tracker
   - Assign it to yourself
   - Move it to "In Progress" on the board
2. Create a Branch
   - Create a branch from `main` with a descriptive name
   - Branch names should follow the pattern `feature/feature-name` or `bugfix/issue-description`
3. Implement Changes
   - Follow the code style guidelines
   - Implement tests for your changes
   - Update documentation as necessary
4. Run Tests
   - Run unit tests: `cargo test`
   - Run integration tests: `cargo test --test '*'`
   - Check code style: `cargo fmt --check`
   - Run linting: `cargo clippy`
5. Create a Merge Request
   - Push your branch to the remote repository
   - Create a merge request with a clear description
   - Link the related issue(s)
   - Request a review from the appropriate team members
6. Address Review Feedback
   - Respond to review comments
   - Make necessary changes
   - Push updates to your branch
7. Merge the Changes
   - Once approved, your merge request will be merged
   - The CI/CD pipeline will deploy the changes
Branching Strategy
We follow a simplified GitFlow approach:
- `main`: Stable code that has passed all tests
- `feature/*`: New features or improvements
- `bugfix/*`: Bug fixes
- `release/*`: Release preparation branches
- `hotfix/*`: Urgent fixes for production issues
Code Review Process
1. Checklist for Submitters
   - Code follows project standards
   - Tests are included and pass
   - Documentation is updated
   - No unnecessary dependencies added
   - Performance considerations addressed
   - Security implications considered
2. Checklist for Reviewers
   - Code quality and style
   - Test coverage and quality
   - Documentation completeness
   - Architecture and design patterns
   - Security and performance
   - Compatibility and backward compatibility
Testing Requirements
All contributions must include appropriate tests:
- Unit Tests: Test individual functions and methods
- Integration Tests: Test interactions between components
- Property Tests: For complex algorithms or data structures
- Performance Tests: For performance-critical code
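Property tests are usually written with a crate like proptest, but the idea can be sketched with only the standard library: generate many inputs and assert an invariant over all of them. The `format_cache_key` helper and the generator below are illustrative, not part of Navius:

```rust
/// Hypothetical helper under test.
fn format_cache_key(namespace: &str, id: &str) -> String {
    format!("{namespace}:{id}")
}

/// Tiny deterministic generator standing in for a property-testing crate.
fn next(seed: &mut u64) -> u64 {
    *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    *seed >> 33
}

fn main() {
    let mut seed = 42u64;
    for _ in 0..1000 {
        // Generate short, colon-free alphanumeric inputs.
        let len = (next(&mut seed) % 8 + 1) as usize;
        let ns: String = (0..len)
            .map(|_| (b'a' + (next(&mut seed) % 26) as u8) as char)
            .collect();
        let id = (next(&mut seed) % 10_000).to_string();

        // Property: a formatted key always splits back into its parts.
        let key = format_cache_key(&ns, &id);
        let (left, right) = key.split_once(':').expect("separator present");
        assert_eq!(left, ns);
        assert_eq!(right, id);
    }
    println!("property held for 1000 generated cases");
}
```

A crate such as proptest additionally shrinks failing inputs to a minimal counterexample, which this sketch does not attempt.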
Minimum coverage requirements:
- Core modules: 90%
- Utility code: 80%
- Overall project: 85%
Documentation Requirements
All code contributions must be accompanied by appropriate documentation:
1. Code Documentation
   - Public APIs must have doc comments
   - Complex algorithms must be explained
   - Function parameters and return values must be documented
2. User Documentation
   - New features need user documentation
   - Examples of how to use the feature
   - Configuration options
3. Architecture Documentation
   - Major changes should include design considerations
   - Data flow diagrams for complex features
   - API interaction diagrams where relevant
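As a minimal sketch of the code-documentation bar, a hypothetical public helper with a doc comment covering its purpose, parameters, and return value:

```rust
/// Builds a namespaced cache key, e.g. `format_cache_key("user", "123")`
/// returns `"user:123"`.
///
/// # Arguments
/// * `namespace` - Logical grouping for the key, e.g. `"user"`.
/// * `id` - Unique identifier within the namespace.
pub fn format_cache_key(namespace: &str, id: &str) -> String {
    format!("{namespace}:{id}")
}

fn main() {
    assert_eq!(format_cache_key("user", "123"), "user:123");
}
```

Doc comments like this are rendered by `cargo doc`, so the summary line should stand alone as a one-sentence description.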
Working with Complex Features
Caching Features
When working with caching features like the Two-Tier Cache:
1. Performance Considerations
   - Always benchmark before and after changes
   - Consider memory usage and optimization
   - Test with realistic data volumes
2. Consistency Requirements
   - Ensure proper synchronization between cache layers
   - Implement clear invalidation strategies
   - Test race conditions and concurrent access
3. Error Handling
   - Graceful degradation when Redis is unavailable
   - Clear error messages and logging
   - Recovery mechanisms
4. Testing Approach
   - Mock Redis for unit tests
   - Use real Redis for integration tests
   - Test cache miss/hit scenarios
   - Test cache invalidation
   - Test concurrent access
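The degradation and promotion requirements above can be sketched in miniature. This synchronous, in-memory model is illustrative only — the real Two-Tier Cache is async and Redis-backed, and all names here are hypothetical:

```rust
use std::collections::HashMap;

// Hypothetical minimal stand-ins for the real cache layers.
struct MemoryCache { data: HashMap<String, Vec<u8>> }
struct RemoteCache { available: bool, data: HashMap<String, Vec<u8>> }

impl RemoteCache {
    fn get(&self, key: &str) -> Result<Option<Vec<u8>>, String> {
        if !self.available {
            return Err("redis unavailable".into()); // e.g. connection refused
        }
        Ok(self.data.get(key).cloned())
    }
}

struct TwoTierCache { fast: MemoryCache, slow: RemoteCache }

impl TwoTierCache {
    /// Check the fast layer first; on a miss, try the slow layer and
    /// degrade gracefully (log and report a miss) if it errors.
    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(v) = self.fast.data.get(key) {
            return Some(v.clone());
        }
        match self.slow.get(key) {
            Ok(Some(v)) => {
                // Promote to the fast layer on a slow-layer hit.
                self.fast.data.insert(key.to_string(), v.clone());
                Some(v)
            }
            Ok(None) => None,
            Err(e) => {
                eprintln!("slow cache degraded: {e}");
                None
            }
        }
    }
}

fn main() {
    let mut slow_data = HashMap::new();
    slow_data.insert("k".to_string(), b"v".to_vec());
    let mut cache = TwoTierCache {
        fast: MemoryCache { data: HashMap::new() },
        slow: RemoteCache { available: true, data: slow_data },
    };
    assert_eq!(cache.get("k"), Some(b"v".to_vec())); // slow hit, promoted
    cache.slow.available = false;
    assert_eq!(cache.get("k"), Some(b"v".to_vec())); // served from fast layer
    assert_eq!(cache.get("missing"), None);          // degraded miss, no panic
}
```

Note how a slow-layer error is reported as a miss rather than propagated, which is the graceful-degradation behavior the checklist asks for.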
Server Customization System
When working with the Server Customization System:
1. Feature Flags
   - Ensure clean separation of features
   - Test all combinations of feature flags
   - Document impacts of each feature flag
2. Build System Integration
   - Test compilation with various feature combinations
   - Ensure proper dependency resolution
   - Verify binary size optimization
3. Default Configurations
   - Provide sensible defaults
   - Document recommended configurations
   - Test startup with various configurations
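One way to pair sensible defaults with startup validation is shown below; the field names are illustrative, not the actual Navius configuration schema:

```rust
#[derive(Debug)]
struct ServerConfig {
    port: u16,
    worker_threads: usize,
    cache_ttl_seconds: u64,
}

impl Default for ServerConfig {
    // Sensible defaults: the server should start with no configuration at all.
    fn default() -> Self {
        Self { port: 8080, worker_threads: 4, cache_ttl_seconds: 300 }
    }
}

impl ServerConfig {
    /// Reject configurations the server cannot start with.
    fn validate(&self) -> Result<(), String> {
        if self.worker_threads == 0 {
            return Err("worker_threads must be at least 1".into());
        }
        Ok(())
    }
}

fn main() {
    // The defaults themselves must always pass validation.
    let default = ServerConfig::default();
    assert!(default.validate().is_ok());

    // An explicitly broken configuration is rejected at startup.
    let bad = ServerConfig { worker_threads: 0, ..Default::default() };
    assert!(bad.validate().is_err());
    println!("config checks passed");
}
```

Keeping `Default` and `validate` next to each other makes "defaults always validate" an easy invariant to test.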
Release Process
1. Versioning
   - We follow Semantic Versioning
   - Breaking changes increment the major version
   - New features increment the minor version
   - Bug fixes increment the patch version
2. Release Preparation
   - Update version numbers
   - Update CHANGELOG.md
   - Create a release branch
   - Run final tests
3. Publishing
   - Tag the release
   - Create release notes
   - Publish to crates.io
   - Announce to the community
4. Post-Release
   - Monitor for issues
   - Update documentation website
   - Plan next release
Troubleshooting Common Issues
Build Failures
If you encounter build failures:
- Update dependencies: `cargo update`
- Clean build artifacts: `cargo clean`
- Check for compatibility issues between dependencies
- Verify your Rust version: `rustc --version`
Test Failures
If tests are failing:
- Run specific failing tests with verbose output: `cargo test test_name -- --nocapture`
- Check for environment-specific issues
- Verify test dependencies are installed
- Check for race conditions in async tests
Performance Issues
If you encounter performance issues:
- Profile with `cargo flamegraph`
- Check for memory leaks with appropriate tools
- Look for inefficient algorithms or data structures
- Consider parallelization opportunities
Additional Resources
title: "Testing Guidelines" description: "Testing guidelines for Navius components" category: "Reference" tags: ["documentation", "reference"] last_updated: "April 3, 2025" version: "1.0"
Testing Guidelines
This document outlines testing guidelines for Navius components, with special focus on testing complex features like the Two-Tier Cache implementation.
Table of Contents
- General Testing Principles
- Test Types
- Test Structure
- Testing Complex Components
- Test Coverage Requirements
- Mocking and Test Doubles
- CI/CD Integration
General Testing Principles
- Test Isolation: Each test should be isolated from others and not depend on external state
- Coverage: Aim for high test coverage, but focus on critical paths and edge cases
- Test Behavior: Test the behavior of components, not their implementation details
- Reliability: Tests should be reliable and not produce flaky results
- Performance: Tests should execute quickly to support rapid development
- Readability: Tests should be easy to understand and maintain
Test Types
Unit Tests
- Test individual functions and methods in isolation
- Mock external dependencies
- Focus on specific behavior
```rust
#[test]
fn test_cache_key_formatting() {
    let key = format_cache_key("user", "123");
    assert_eq!(key, "user:123");
}
```
Integration Tests
- Test interactions between components
- Use real implementations or realistic mocks
- Focus on component boundaries
```rust
#[tokio::test]
async fn test_cache_with_redis() {
    let redis = MockRedisClient::new();
    let cache = RedisCache::new(redis);

    cache.set("test", b"value", None).await.unwrap();
    let result = cache.get("test").await.unwrap();

    assert_eq!(result, b"value");
}
```
End-to-End Tests
- Test the entire system as a whole
- Use real external dependencies where possible
- Focus on user scenarios
```rust
#[tokio::test]
async fn test_user_service_with_cache() {
    let app = test_app().await;

    // Create a user
    let user_id = app.create_user("[email protected]").await.unwrap();

    // First request should hit the database
    let start = Instant::now();
    let user1 = app.get_user(user_id).await.unwrap();
    let first_request_time = start.elapsed();

    // Second request should hit the cache
    let start = Instant::now();
    let user2 = app.get_user(user_id).await.unwrap();
    let second_request_time = start.elapsed();

    // Verify cache is faster
    assert!(second_request_time < first_request_time);

    // Verify data is the same
    assert_eq!(user1, user2);
}
```
Test Structure
Follow the AAA pattern for test structure:
- Arrange: Set up the test conditions
- Act: Execute the code under test
- Assert: Verify the expected outcome
```rust
#[test]
fn test_cache_ttl() {
    // Arrange
    let mock_clock = MockClock::new();
    let cache = InMemoryCache::new_with_clock(100, mock_clock.clone());

    // Act - Set a value with TTL
    cache.set("key", b"value", Some(Duration::from_secs(5))).unwrap();

    // Assert - Value exists before expiration
    assert_eq!(cache.get("key").unwrap(), b"value");

    // Act - Advance time past TTL
    mock_clock.advance(Duration::from_secs(6));

    // Assert - Value is gone after expiration
    assert!(cache.get("key").is_err());
}
```
Testing Complex Components
Testing Cache Implementations
Testing caching components requires special attention to:
1. Cache Hit/Miss Scenarios

```rust
#[tokio::test]
async fn test_cache_hit_miss() {
    let cache = create_test_cache().await;

    // Test cache miss
    let result = cache.get("missing-key").await;
    assert!(result.is_err());
    assert!(matches!(result.unwrap_err(), AppError::NotFound { .. }));

    // Set value and test cache hit
    cache.set("test-key", b"value", None).await.unwrap();
    let result = cache.get("test-key").await.unwrap();
    assert_eq!(result, b"value");
}
```

2. TTL Behavior

```rust
#[tokio::test]
async fn test_cache_ttl() {
    let cache = create_test_cache().await;

    // Set with short TTL
    cache.set("expires", b"value", Some(Duration::from_millis(100))).await.unwrap();

    // Verify exists
    let result = cache.get("expires").await.unwrap();
    assert_eq!(result, b"value");

    // Wait for expiration
    tokio::time::sleep(Duration::from_millis(150)).await;

    // Verify expired
    let result = cache.get("expires").await;
    assert!(result.is_err());
}
```

3. Two-Tier Cache Promotion

```rust
#[tokio::test]
async fn test_two_tier_promotion() {
    let fast_cache = MockCache::new("fast");
    let slow_cache = MockCache::new("slow");

    // Configure mocks
    fast_cache.expect_get().with(eq("key")).return_error(AppError::not_found("key"));
    slow_cache.expect_get().with(eq("key")).return_once(|_| Ok(b"value".to_vec()));
    fast_cache.expect_set().with(eq("key"), eq(b"value".to_vec()), any()).return_once(|_, _, _| Ok(()));

    let two_tier = TwoTierCache::new(
        Box::new(fast_cache),
        Box::new(slow_cache),
        true, // promote_on_get
        None,
        None,
    );

    // Item should be fetched from slow cache and promoted to fast cache
    let result = two_tier.get("key").await.unwrap();
    assert_eq!(result, b"value");
}
```

4. Redis Unavailability

```rust
#[tokio::test]
async fn test_redis_unavailable() {
    let config = CacheConfig {
        redis_url: "redis://nonexistent:6379",
        // other config...
    };

    // Create cache with invalid Redis URL
    let cache = create_memory_only_two_tier_cache(&config, None).await;

    // Should still work using just the memory cache
    cache.set("test", b"value", None).await.unwrap();
    let result = cache.get("test").await.unwrap();
    assert_eq!(result, b"value");
}
```

5. Concurrent Operations

```rust
#[tokio::test]
async fn test_concurrent_operations() {
    let cache = create_test_cache().await;

    // Spawn multiple tasks writing to the same key
    let mut handles = vec![];
    for i in 0..10 {
        let cache_clone = cache.clone();
        let handle = tokio::spawn(async move {
            let value = format!("value-{}", i).into_bytes();
            cache_clone.set("concurrent-key", value, None).await.unwrap();
        });
        handles.push(handle);
    }

    // Wait for all operations to complete
    for handle in handles {
        handle.await.unwrap();
    }

    // Verify key exists
    let result = cache.get("concurrent-key").await;
    assert!(result.is_ok());
}
```
Testing Server Customization
For server customization components, focus on:
- Feature Flag Combinations
- Feature Dependency Resolution
- Configuration Validation
- Build System Integration
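Feature dependency resolution, for example, can be unit-tested as a plain graph walk. The resolver and feature names below are an illustrative sketch, not the actual Server Customization API:

```rust
use std::collections::{BTreeSet, HashMap};

/// Expand a set of requested features with their transitive dependencies.
fn resolve(requested: &[&str], deps: &HashMap<&str, Vec<&str>>) -> BTreeSet<String> {
    let mut resolved = BTreeSet::new();
    let mut stack: Vec<&str> = requested.to_vec();
    while let Some(feature) = stack.pop() {
        // `insert` returns false if already present, so cycles terminate.
        if resolved.insert(feature.to_string()) {
            if let Some(ds) = deps.get(feature) {
                stack.extend(ds.iter().copied());
            }
        }
    }
    resolved
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("caching", vec!["metrics"]);
    deps.insert("metrics", vec![]);
    deps.insert("auth", vec![]);

    // Requesting "caching" must pull in "metrics" transitively.
    let features = resolve(&["caching"], &deps);
    assert!(features.contains("metrics"));
    assert_eq!(features.len(), 2);
    println!("resolved: {features:?}");
}
```

Tests like this pin down the resolution rules independently of the build system, so a Cargo feature misconfiguration shows up as a unit-test failure rather than a broken binary.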
Test Coverage Requirements
Aim for the following coverage levels:
| Component Type | Minimum Coverage |
|---|---|
| Core Services | 90% |
| Cache Implementations | 95% |
| Utilities | 80% |
| API Handlers | 85% |
| Configuration | 90% |
Mocking and Test Doubles
1. Use Mock Implementations for External Dependencies

```rust
#[derive(Clone)]
struct MockRedisClient {
    data: Arc<RwLock<HashMap<String, Vec<u8>>>>,
}

#[async_trait]
impl RedisClient for MockRedisClient {
    async fn get(&self, key: &str) -> Result<Option<Vec<u8>>, RedisError> {
        let data = self.data.read().await;
        Ok(data.get(key).cloned())
    }

    async fn set(&self, key: &str, value: Vec<u8>, ttl: Option<Duration>) -> Result<(), RedisError> {
        let mut data = self.data.write().await;
        data.insert(key.to_string(), value);
        Ok(())
    }

    // Other methods...
}
```

2. Inject Test Doubles

```rust
#[tokio::test]
async fn test_cache_with_mock_redis() {
    let redis = MockRedisClient::new();
    let cache = RedisCache::new(Arc::new(redis));

    // Test cache operations...
}
```

3. Use Test Fixtures for Common Setup

```rust
async fn create_test_cache() -> Arc<Box<dyn DynCacheOperations>> {
    let config = CacheConfig {
        redis_url: "redis://localhost:6379".to_string(),
        // other test config...
    };

    // Use in-memory implementation for tests
    create_memory_only_two_tier_cache(&config, None).await
}
```
CI/CD Integration
1. Run Tests on Every PR

```yaml
# In .gitlab-ci.yml
test:
  stage: test
  script:
    - cargo test
```

2. Track Code Coverage

```yaml
coverage:
  stage: test
  script:
    - cargo install cargo-tarpaulin
    - cargo tarpaulin --out Xml
    - upload-coverage coverage.xml
```

3. Enforce Coverage Thresholds

```yaml
coverage:
  stage: test
  script:
    - cargo tarpaulin --out Xml --fail-under 85
```
Frequently Asked Questions
How to test async code?
Use the `#[tokio::test]` attribute for async tests:
```rust
#[tokio::test]
async fn test_async_cache_operations() {
    let cache = create_test_cache().await;

    // Test async operations...
}
```
How to test error handling?
Test both success and error cases:
```rust
#[tokio::test]
async fn test_cache_error_handling() {
    let cache = create_test_cache().await;

    // Test missing key
    let result = cache.get("nonexistent").await;
    assert!(result.is_err());

    // Test invalid serialization
    let typed_cache = cache.get_typed_cache::<User>();
    let result = typed_cache.get("invalid-json").await;
    assert!(result.is_err());
}
```
How to test with real Redis?
For integration tests, use a real Redis instance:
```rust
#[tokio::test]
async fn test_with_real_redis() {
    // Skip if Redis is not available
    if !is_redis_available("redis://localhost:6379").await {
        println!("Skipping test: Redis not available");
        return;
    }

    let config = CacheConfig {
        redis_url: "redis://localhost:6379".to_string(),
        // other config...
    };

    let cache = create_two_tier_cache(&config, None).await.unwrap();

    // Test with real Redis...
}
```
title: "Developer Onboarding Guide" description: "Guide to getting started as a developer on the Navius project" category: contributing tags:
- api
- architecture
- authentication
- caching
- database
- development
- documentation
- integration
- redis
- testing last_updated: March 27, 2025 version: 1.0
Developer Onboarding Guide
Updated At: March 23, 2025
Welcome to the Navius project! This guide will help you get started as a developer on the project.
Getting Started
Prerequisites
Before you begin, ensure you have the following installed:
- Rust (latest stable version)
- Cargo (comes with Rust)
- Git
- Docker and Docker Compose
- VS Code (recommended) or your preferred IDE
Setting Up Your Development Environment
- Clone the repository:

```bash
git clone https://gitlab.com/navius/navius.git
cd navius
```
- Set up environment variables:
  Create a `.env` file in the project root with the following variables:

```text
RUST_LOG=debug
CONFIG_DIR=./config
RUN_ENV=development
```
- Install IDE extensions:
For VS Code, set up the recommended extensions:
```bash
mkdir -p .vscode
cp .devtools/ide/vscode/* .vscode/
```
Then restart VS Code and install the recommended extensions when prompted.
- Build the project:

```bash
cargo build
```
This will also generate the API clients from OpenAPI specifications.
- Run the tests:

```bash
cargo test
```
Project Structure
Navius follows a modular architecture with a clean separation of concerns. See the Project Navigation Guide for a detailed explanation of the codebase structure.
Key directories:
- `src/core/` - Core business logic and framework functionality
- `src/app/` - User-extensible application code
- `config/` - Configuration files
- `docs/` - Documentation
- `.devtools/` - Development tools and scripts
Development Workflow
Running the Server
To run the development server:

```bash
.devtools/scripts/run_dev.sh
```
Adding a New Feature
- Create a feature branch:

```bash
git checkout -b feature/your-feature-name
```
- Implement the feature:
  - Add routes in `src/app/router.rs`
  - Implement handlers in `src/app/api/`
  - Add business logic in `src/app/services/`
- Add tests for your feature
- Run tests: `cargo test`
- Verify code style: `cargo clippy` and `cargo fmt --check`
- Create a merge request:
Push your changes and create a merge request on GitLab.
Useful Development Scripts
The project includes several helper scripts in the `.devtools/scripts/` directory:
- `run_dev.sh` - Run the development server
- `regenerate_api.sh` - Regenerate API clients from OpenAPI specs
- `navigate.sh` - Help navigate the codebase
- `verify-structure.sh` - Verify the project structure
Example usage:

```bash
# Find files in the auth component
.devtools/scripts/navigate.sh component auth

# Trace a request flow
.devtools/scripts/navigate.sh flow "GET /users"

# Verify project structure
.devtools/scripts/verify-structure.sh
```
Debugging
VS Code launch configurations are provided for debugging:
- Open the "Run and Debug" panel in VS Code
- Select "Debug Navius Server" to debug the server
- Set breakpoints in your code
- Start debugging (F5)
For debugging tests, use the "Debug Unit Tests" configuration.
Architecture Overview
Navius follows clean architecture principles:
1. Core Layer (`src/core/`):
   - Contains the core business logic
   - Independent from external frameworks
   - Defines interfaces for external dependencies
2. Application Layer (`src/app/`):
   - User-extensible scaffolding
   - Uses core functionality
   - Provides extension points for customization
3. Framework Integration:
   - Uses Axum as the web framework
   - SQLx for database access
   - Redis for caching
See the Module Dependencies Diagram for a visual representation of the architecture.
Code Examples
Here are practical examples to help you understand how to work with the Navius codebase:
Adding an API Endpoint
Create a new handler in your application's API directory:
```rust
// src/app/api/users.rs
use axum::{
    Json,
    extract::{Path, State},
};
use std::sync::Arc;
use tracing::info;

use crate::core::{
    error::{AppError, Result},
    router::AppState,
};
use crate::app::services::user_service::UserService;

pub async fn get_user_handler(
    State(state): State<Arc<AppState>>,
    Path(user_id): Path<String>,
) -> Result<Json<User>> {
    info!("User lookup requested for ID: {}", user_id);

    // Access user service from state
    let user_service = &state.user_service;

    // Fetch user from service
    let user = user_service.get_user_by_id(&user_id).await?;

    // Return JSON response
    Ok(Json(user))
}
```
Adding Routes
Register your new endpoints in the application router:
```rust
// In src/app/router.rs
use crate::app::api::users::{get_user_handler, create_user_handler};

// Inside your router function
pub fn app_routes() -> axum::Router<Arc<AppState>> {
    let router = axum::Router::new();

    // Public routes (no authentication)
    let public_routes = Router::new()
        .route("/users/:id", get(get_user_handler));

    // Full access routes (require authentication)
    let full_access_routes = Router::new()
        .route("/users", post(create_user_handler));

    // Combine routes
    router
        .merge(public_routes)
        .nest("/full", full_access_routes)
}
```
Creating a Service
Implement a service for business logic:
```rust
// src/app/services/user_service.rs
use async_trait::async_trait;
use crate::core::error::Result;
use crate::app::models::user_entity::User;
use crate::core::repository::UserRepository;

#[async_trait]
pub trait UserService: Send + Sync {
    async fn get_user_by_id(&self, id: &str) -> Result<User>;
    async fn create_user(&self, user: User) -> Result<User>;
}

pub struct DefaultUserService {
    user_repository: Arc<dyn UserRepository>,
}

impl DefaultUserService {
    pub fn new(user_repository: Arc<dyn UserRepository>) -> Self {
        Self { user_repository }
    }
}

#[async_trait]
impl UserService for DefaultUserService {
    async fn get_user_by_id(&self, id: &str) -> Result<User> {
        self.user_repository.find_by_id(id).await
    }

    async fn create_user(&self, user: User) -> Result<User> {
        self.user_repository.save(user).await
    }
}
```
Using the Cache System
Implement a handler that uses the caching system:
```rust
// src/app/api/products.rs
use axum::{
    Json,
    extract::{Path, State},
};
use std::sync::Arc;

use crate::core::{
    error::Result,
    router::AppState,
    utils::api_resource::{ApiHandlerOptions, ApiResource, create_api_handler},
};
use crate::models::Product;

impl ApiResource for Product {
    type Id = String;

    fn resource_type() -> &'static str {
        "product"
    }

    fn api_name() -> &'static str {
        "ProductAPI"
    }
}

pub async fn get_product_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<String>,
) -> Result<Json<Product>> {
    // Define the fetch function
    let fetch_fn = move |state: &Arc<AppState>, id: String| -> futures::future::BoxFuture<'static, Result<Product>> {
        let state = state.clone();
        Box::pin(async move {
            // Your product fetch logic here
            state.product_service.get_product(&id).await
        })
    };

    // Create a handler with caching enabled
    let handler = create_api_handler(
        fetch_fn,
        ApiHandlerOptions {
            use_cache: true,
            use_retries: true,
            max_retry_attempts: 3,
            cache_ttl_seconds: state.config.cache.ttl_seconds,
            detailed_logging: true,
        },
    );

    // Execute the handler
    handler(State(state), Path(id)).await
}
```
Testing Your Code
Write unit tests for your implementations:
```rust
// In your service implementation file
#[cfg(test)]
mod tests {
    use super::*;
    use mockall::predicate::*;
    use crate::core::repository::MockUserRepository;

    #[tokio::test]
    async fn test_get_user_by_id() {
        // Arrange
        let mut mock_repo = MockUserRepository::new();
        let user_id = "user-123";
        let expected_user = User {
            id: user_id.to_string(),
            name: "Test User".to_string(),
            email: "[email protected]".to_string(),
        };

        mock_repo
            .expect_find_by_id()
            .with(eq(user_id))
            .returning(move |_| Ok(expected_user.clone()));

        let service = DefaultUserService::new(Arc::new(mock_repo));

        // Act
        let result = service.get_user_by_id(user_id).await;

        // Assert
        assert!(result.is_ok());
        let user = result.unwrap();
        assert_eq!(user.id, user_id);
        assert_eq!(user.name, "Test User");
    }
}
```
Documentation
All features should be documented. The project uses the following documentation structure:
- `docs/guides/` - User guides and tutorials
- `docs/reference/` - API and technical reference
- `docs/architecture/` - Architecture documentation
- `docs/contributing/` - Contribution guidelines
- `docs/roadmaps/` - Development roadmaps
Getting Help
If you need help with the codebase:
- Consult the Project Navigation Guide
- Use the navigation scripts to explore the codebase
- Read the documentation in the `docs/` directory
- Reach out to the team on the project's communication channels
Related Documents
- Contributing Guide - How to contribute to the project
- Development Setup - Setting up your development environment
title: Navius Guides description: Comprehensive guides for using and extending the Navius framework category: guides tags:
- guides
- development
- features
- deployment related:
- ../getting-started/README.md
- ../reference/README.md last_updated: March 27, 2025 version: 1.0
Navius Framework Guides
Overview
This section contains comprehensive guides for using the Navius framework. These guides are process-oriented, explaining how to accomplish various tasks with Navius.
Development Guides
Guides for the development workflow with Navius:
- Development Workflow - Daily development process and tools
- Testing Guide - Comprehensive guide to testing Navius applications
- Project Navigation - Navigating the Navius codebase effectively
Feature Implementation Guides
Guides for implementing specific features:
- API Design - Best practices for designing APIs in Navius
- API Integration - Integrating with external APIs
- Authentication - Implementing authentication in your application
- Authorization - Implementing authorization and access control
- Database Access - Working with databases in Navius
- Caching - Implementing efficient caching strategies
- Validation - Validating input data
- Error Handling - Handling errors gracefully
- Logging - Implementing logging in your application
- WebSocket Support - Real-time communication with WebSockets
- File Upload - Handling file uploads
Deployment Guides
Guides for deploying Navius applications:
- Production Deployment - Deploying to production environments
- Cloud Deployment - Deploying to major cloud platforms
- Docker Deployment - Using Docker for deployment
- Kubernetes Deployment - Deploying with Kubernetes
- Continuous Integration - Setting up CI/CD pipelines
Performance Guides
Guides for optimizing performance:
- Performance Tuning - Optimizing application performance
- Load Testing - Testing application under load
- Database Optimization - Optimizing database performance
- Caching Strategies - Advanced caching techniques
Integration Guides
Guides for integrating with other systems:
- Email Integration - Sending emails from your application
- Payment Integration - Integrating payment processors
- File Storage - Working with cloud storage services
- Search Integration - Integrating search engines
How to Use These Guides
Each guide is written as a step-by-step tutorial, designed to help you accomplish specific tasks. We recommend:
- Start with the Development Workflow guide to understand the basic development process
- Explore specific feature guides based on your project requirements
- Refer to deployment guides when ready to deploy your application
Related Sections
- Getting Started - Quick start guides for beginners
- Reference Documentation - Technical reference information
- Contributing - Guidelines for contributing to Navius
title: Development Guides description: "Comprehensive guides for developing applications with Navius, including development workflow, testing practices, and debugging techniques" category: guides tags:
- development
- testing
- debugging
- workflow
- best-practices
- tooling
- code-quality related:
- ../README.md
- ../../reference/architecture/principles.md
- ../features/README.md last_updated: April 8, 2025 version: 1.1
Development Guides
This section provides comprehensive guidance for developing applications with Navius. These guides cover development workflows, testing practices, debugging techniques, and best practices for writing high-quality Rust code.
Getting Started
For new developers, we recommend following this learning progression:
- Development Setup - Setting up your development environment
- IDE Setup - Configuring your development environment for optimal productivity
- Git Workflow - Understanding version control practices for Navius
- Development Workflow - Understanding the development process
- Testing Guide - Learning comprehensive testing practices
- Debugging Guide - Mastering debugging techniques
Available Guides
Core Development
- Development Setup - Setting up your development environment
- Development Workflow - Day-to-day development process
- Project Navigation - Understanding the codebase organization
- Development Guide - General development guidelines and best practices
Testing and Quality Assurance
- Testing Guide - Comprehensive guide to testing Navius applications
- Unit testing, integration testing, API testing, and E2E testing
- Test organization and best practices
- Coverage measurement and requirements
- Mocking and test doubles
- Continuous integration setup
- Testing - Overview of testing strategies and tools
Debugging and Troubleshooting
- Debugging Guide - Complete guide to debugging Navius applications
- Common debugging scenarios and solutions
- Debugging tools and techniques
- Logging and tracing configuration
- Rust-specific debugging approaches
- Performance debugging
- Database and API debugging
- Production debugging strategies
Development Tools and Workflows
- IDE Setup - Complete guide to setting up your development environment
- VS Code, JetBrains IDEs, and Vim/Neovim configuration
- Essential extensions and plugins
- Debugging configuration
- Performance optimization
- Troubleshooting common IDE issues
- Git Workflow - Comprehensive guide to version control with Navius
- Branching strategy and naming conventions
- Commit message format and best practices
- Pull request and code review workflows
- Advanced Git techniques and troubleshooting
- CI/CD integration
Development Best Practices
When developing with Navius, follow these key principles:
1. Code Quality
   - Follow Rust coding standards and our style conventions
   - Write clean, expressive, and self-documenting code
   - Apply the principle of least surprise
   - Use appropriate error handling strategies
2. Testing
   - Practice test-driven development when possible
   - Maintain high test coverage (minimum 80% for business logic)
   - Test both success and failure paths
   - Include unit, integration, and API tests
   - Follow the testing practices in our Testing Guide
3. Version Control
   - Follow the Git Workflow guidelines
   - Create focused, single-purpose branches
   - Write meaningful commit messages using conventional commits
   - Keep pull requests manageable in size
   - Review code thoroughly and constructively
4. Documentation
   - Document all public APIs and important functions
   - Maintain up-to-date README files
   - Include examples in documentation
   - Document breaking changes clearly
Development Tools
Essential tools for Navius development:
- IDE and Editor Setup
  - VS Code with Rust Analyzer (recommended)
  - JetBrains CLion with Rust plugin
  - See IDE Setup for complete configuration
- Rust Tools
  - rustc 1.70+ (Rust compiler)
  - cargo (package manager)
  - rustfmt (code formatter)
  - clippy (linter)
- Testing Tools
  - cargo test (test runner)
  - cargo-tarpaulin or grcov (code coverage)
  - mockall (mocking framework)
  - criterion (benchmarking)
- Development Environment
  - Git
  - Docker for services
  - PostgreSQL
  - Redis
Related Resources
- Architecture Principles - Core architectural concepts
- API Reference - API documentation
- Feature Guides - Feature implementation guides
- Deployment Guides - Deployment instructions
- Getting Started - Quick start guides for beginners
Need Help?
If you encounter development issues:
- Check the troubleshooting section in each guide
- Review our Development FAQs
- Join our Discord Community for real-time help
- Open an issue on our GitHub repository
title: "IDE Setup for Navius Development" description: "Comprehensive guide for setting up and configuring development environments for Navius applications" category: "Guides" tags: ["development", "IDE", "tooling", "VS Code", "JetBrains", "debugging", "productivity"] last_updated: "April 7, 2025" version: "1.0"
IDE Setup for Navius Development
This guide provides detailed instructions for setting up and configuring your Integrated Development Environment (IDE) for optimal Navius development. Proper IDE configuration enhances productivity, ensures code quality, and provides essential debugging capabilities.
Table of Contents
- Recommended IDEs
- Visual Studio Code Setup
- JetBrains IDEs Setup
- Other IDEs
- IDE Extensions and Plugins
- Custom Configurations
- Troubleshooting
Recommended IDEs
Navius development works best with the following IDEs:
- Visual Studio Code - Free, lightweight, with excellent Rust support
- JetBrains CLion/IntelliJ IDEA - Full-featured IDEs with robust Rust integration
- Vim/Neovim - For developers who prefer terminal-based environments
Visual Studio Code Setup
Installation and Basic Setup
- Download and install VS Code from code.visualstudio.com
- Install the Rust extension pack:
- Open VS Code
- Go to Extensions (Ctrl+Shift+X or Cmd+Shift+X)
- Search for "Rust Extension Pack" and install it
Project Configuration
For optimal Navius development, copy the provided configuration files:
# Create .vscode directory if it doesn't exist
mkdir -p .vscode
# Copy recommended configuration files
cp .devtools/ide/vscode/* .vscode/
The configuration includes:
- `settings.json` - Optimized settings for Rust development
- `launch.json` - Debug configurations for running Navius
- `tasks.json` - Common tasks like build, test, and formatting
- `extensions.json` - Recommended extensions
Essential Extensions
The following extensions are recommended for Navius development:
- rust-analyzer - Provides code completion, navigation, and inline errors
- CodeLLDB - Debugger for Rust code
- crates - Helps manage Rust dependencies
- Even Better TOML - TOML file support for configuration files
- GitLens - Enhanced Git integration
- SQL Tools - SQL support for database work
Debugging Configuration
VS Code's debugging capabilities work well with Navius. The provided `launch.json` includes configurations for:
- Debug Navius Server - Run the main server with debugging
- Debug Unit Tests - Run all tests with debugging
- Debug Current File's Tests - Run tests for the currently open file
To start debugging:
- Set breakpoints by clicking in the gutter next to line numbers
- Press F5 or select a launch configuration from the Run panel
- Use the debug toolbar to step through code, inspect variables, and more
Custom Tasks
The provided `tasks.json` includes useful tasks for Navius development:
- Build Navius - Build the project
- Run Tests - Run all tests
- Format Code - Format using rustfmt
- Check with Clippy - Run the Rust linter
To run a task:
- Press Ctrl+Shift+P (Cmd+Shift+P on macOS)
- Type "Run Task"
- Select the desired task
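As a sketch of what one entry in the provided `tasks.json` might look like (the actual file shipped in `.devtools/ide/vscode/` may differ; the `$rustc` problem matcher is supplied by the Rust extension):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Check with Clippy",
      "type": "shell",
      "command": "cargo clippy --all-targets --all-features",
      "group": "build",
      "problemMatcher": ["$rustc"]
    }
  ]
}
```

Tasks defined this way show up under "Run Task" and feed Clippy warnings straight into the Problems panel.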
JetBrains IDEs Setup
Installation and Setup
- Download and install CLion or IntelliJ IDEA with the Rust plugin
- Install the Rust plugin if not already installed:
- Go to Settings/Preferences → Plugins
- Search for "Rust" and install the plugin
- Restart the IDE
Project Configuration
- Open the Navius project directory
- Configure the Rust toolchain:
- Go to Settings/Preferences → Languages & Frameworks → Rust
- Set the toolchain location to your rustup installation
- Enable external linter integration (Clippy)
Essential Plugins
The following plugins enhance the development experience:
- Rust - Core Rust language support
- Database Navigator - Database support for PostgreSQL
- EnvFile - Environment file support
- GitToolBox - Enhanced Git integration
Run Configurations
Create the following run configurations:
Navius Server Configuration
- Go to Run → Edit Configurations
- Click the + button and select "Cargo"
- Set the name to "Run Navius Server"
- Set Command to "run"
- Set Working directory to the project root
- Add the following environment variables:
RUST_LOG=debug
CONFIG_DIR=./config
RUN_ENV=development
Test Configuration
- Go to Run → Edit Configurations
- Click the + button and select "Cargo"
- Set the name to "Run Navius Tests"
- Set Command to "test"
- Set Working directory to the project root
Debugging
JetBrains IDEs provide robust debugging support:
- Set breakpoints by clicking in the gutter
- Start debugging by clicking the debug icon next to your run configuration
- Use the Debug tool window to inspect variables, evaluate expressions, and control execution flow
Other IDEs
Vim/Neovim
For Vim/Neovim users:
- Install the rust-analyzer language server:
  rustup component add rust-analyzer
- Configure a language server client such as coc.nvim, or use Neovim's built-in LSP
- Add the following to your Vim/Neovim configuration:
  " For coc.nvim
  let g:coc_global_extensions = ['coc-rust-analyzer']
  Or for Neovim's built-in LSP:
  require('lspconfig').rust_analyzer.setup{
    settings = {
      ["rust-analyzer"] = {
        assist = {
          importGranularity = "module",
          importPrefix = "self",
        },
        cargo = { loadOutDirsFromCheck = true },
        procMacro = { enable = true },
      }
    }
  }
- Install recommended plugins:
  - vim-fugitive for Git integration
  - fzf.vim for fuzzy finding
  - tagbar for code navigation
IDE Extensions and Plugins
Productivity Enhancers
These extensions improve your development workflow:
Visual Studio Code
- Bookmarks - Mark lines and easily navigate between them
- Error Lens - Highlight errors and warnings inline
- Todo Tree - Track TODO comments in your codebase
JetBrains IDEs
- Key Promoter X - Learn keyboard shortcuts as you work
- Statistic - Track code statistics
- Rust Rover - Advanced Rust code navigation (for CLion)
Code Quality Tools
Extensions that help maintain code quality:
Visual Studio Code
- Error Lens - Enhanced error visibility
- Code Spell Checker - Catch typos in comments and strings
- Better Comments - Categorize comments by type
JetBrains IDEs
- SonarLint - Static code analysis
- Rainbow Brackets - Color-coded bracket pairs
- Clippy Annotations - View Clippy suggestions inline
Custom Configurations
Performance Optimization
For larger Navius projects, optimize your IDE performance:
Visual Studio Code
Add to your settings.json:
{
"rust-analyzer.cargo.features": ["all"],
"rust-analyzer.procMacro.enable": true,
"rust-analyzer.cargo.allFeatures": false,
"files.watcherExclude": {
"**/target/**": true
}
}
JetBrains IDEs
- Increase memory allocation:
  - Help → Edit Custom VM Options
  - Set `-Xmx4096m` (or higher based on available RAM)
- Exclude the `target` directory from indexing:
  - Right-click the target directory → Mark Directory as → Excluded
Theming for Readability
Recommended themes for Rust development:
Visual Studio Code
- One Dark Pro - Good contrast for Rust code
- GitHub Theme - Clean and readable
- Night Owl - Excellent for night coding sessions
JetBrains IDEs
- Darcula - Default dark theme with good Rust support
- Material Theme UI - Modern look with good color coding
- One Dark - Consistent coloring across code elements
Troubleshooting
Common Issues
Rust Analyzer Problems
- Issue: Rust Analyzer stops working or shows incorrect errors
- Solution:
- Restart the Rust Analyzer server
- Check that your `Cargo.toml` is valid
- Run `cargo clean && cargo check` to rebuild project metadata
Debugging Fails
- Issue: Cannot hit breakpoints when debugging
- Solution:
- Ensure LLDB or GDB is properly installed
- Check that you're running a debug build (`cargo build`)
- Verify launch configurations match your project structure
Performance Issues
- Issue: IDE becomes slow when working with Navius
- Solution:
- Exclude the `target` directory from indexing
- Increase available memory for the IDE
- Disable unused plugins/extensions
Contact Support
If you encounter persistent issues with your IDE setup:
- Check the Navius Developer Forum
- Submit an issue on the Navius GitHub Repository
- Join the Navius Discord Server for real-time support
Related Resources
- Development Environment Setup
- Navius Development Workflow
- Testing Guide
- Debugging Guide
- Official Rust Tools Documentation
title: "Git Workflow for Navius Development" description: "Best practices and guidelines for using Git effectively in Navius projects" category: "Guides" tags: ["development", "git", "version control", "collaboration", "branching", "commits"] last_updated: "April 7, 2025" version: "1.0"
Git Workflow for Navius Development
This guide outlines the recommended Git workflow and best practices for Navius projects. Following these guidelines ensures consistent version control, simplifies collaboration, and maintains a clean project history.
Table of Contents
- Git Configuration
- Branching Strategy
- Commit Guidelines
- Pull Requests and Code Review
- Integration and Deployment
- Advanced Git Techniques
- Troubleshooting
Git Configuration
Initial Setup
Configure your Git environment for Navius development:
# Set your identity
git config --global user.name "Your Name"
git config --global user.email "[email protected]"
# Configure line endings (important for cross-platform development)
# For macOS/Linux
git config --global core.autocrlf input
# For Windows
git config --global core.autocrlf true
# Enable helpful coloring
git config --global color.ui auto
# Set default branch to main
git config --global init.defaultBranch main
Navius-Specific Configuration
Add the recommended Git hooks for Navius development:
# Copy hooks from the repository
cp -r .devtools/git-hooks/* .git/hooks/
chmod +x .git/hooks/*
These hooks provide:
- Pre-commit formatting with rustfmt
- Pre-push checks with Clippy
- Commit message validation
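As a sketch of what the commit-message validation hook might check — the actual hooks live in `.devtools/git-hooks/` and may differ; the regex below is an assumption based on the Conventional Commits format this guide uses:

```shell
#!/bin/sh
# Sketch of a commit-msg hook enforcing the Conventional Commits format.
# The real hooks shipped in .devtools/git-hooks/ may be stricter.

validate_commit_msg() {
    # Accepts "type(scope)?: description", e.g. "feat(auth): add login"
    echo "$1" | grep -Eq '^(feat|fix|docs|style|refactor|test|chore|perf)(\([a-z0-9_-]+\))?: .+'
}

# Git invokes commit-msg hooks with the path of the message file as $1.
# When installed as a hook, the entry point would be:
# validate_commit_msg "$(head -n1 "$1")" || { echo "Invalid commit message" >&2; exit 1; }
```

A hook like this rejects the commit before it lands, so malformed messages never reach review.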
Branching Strategy
Navius uses a simplified GitFlow branching strategy:
Main Branches
- `main` - The production-ready code
- `develop` - Integration branch for features (when applicable for larger projects)
Supporting Branches
- `feature/*` - New features or enhancements
- `bugfix/*` - Bug fixes
- `hotfix/*` - Urgent fixes for production
- `release/*` - Release preparation branches
- `docs/*` - Documentation changes
- `refactor/*` - Code refactoring without changing functionality
- `test/*` - Adding or modifying tests
Branch Naming Convention
Follow this naming convention for branches:
<type>/<issue-number>-<short-description>
Examples:
feature/123-add-user-authentication
bugfix/456-fix-database-connection
docs/789-update-api-documentation
Branch Lifecycle
- Create a branch from the appropriate base:
  # For features and most work, branch from main
  git checkout main
  git pull
  git checkout -b feature/123-add-user-authentication
- Work on your branch:
  # Make changes, commit frequently
  git add .
  git commit -m "Add login form component"
- Keep your branch updated:
  # Regularly pull and rebase from main
  git fetch origin
  git rebase origin/main
- Complete your work and prepare for merge:
  # Ensure tests pass
  cargo test
  # Push your branch
  git push -u origin feature/123-add-user-authentication
- Create a pull request (on GitHub/GitLab)
- After approval and merge, delete the branch:
  git checkout main
  git pull
  git branch -d feature/123-add-user-authentication
Commit Guidelines
Commit Message Format
Navius follows the Conventional Commits specification. Each commit message should have this structure:
<type>(<scope>): <description>
[optional body]
[optional footer(s)]
Where:
- `<type>` is one of:
  - `feat`: A new feature
  - `fix`: A bug fix
  - `docs`: Documentation changes
  - `style`: Code style changes (formatting, indentation)
  - `refactor`: Code refactoring
  - `test`: Adding or modifying tests
  - `chore`: Changes to the build process, tools, etc.
  - `perf`: Performance improvements
- `<scope>` is optional and specifies the module affected (e.g., `auth`, `api`, `db`)
- `<description>` is a concise summary of the change
Example:
feat(auth): implement JWT token validation
Add JWT validation middleware to protect API routes.
Includes token expiration checking and role-based verification.
Resolves: #123
Commit Best Practices
- Make frequent, small commits - Easier to review and understand
- One logical change per commit - Don't mix unrelated changes
- Write clear commit messages - Explain what and why, not how
- Reference issue numbers - Link commits to issues
- Ensure code compiles before committing - Don't break the build
Pull Requests and Code Review
Creating a Pull Request
- Push your branch to the remote repository:
  git push -u origin feature/123-add-user-authentication
- Create a pull request using your project's Git hosting service (GitHub/GitLab)
- Complete the pull request template with:
  - A clear description of the changes
  - Links to related issues
  - Testing procedures
  - Screenshots (if UI changes)
  - Any deployment considerations
Pull Request Guidelines
- Keep PRs focused and small - Ideally under 500 lines of changes
- Complete the PR description thoroughly - Help reviewers understand your changes
- Self-review before requesting reviews - Check your own code first
- Respond promptly to review comments - Maintain momentum
- Rebase before merging - Keep history clean
Code Review Process
- Automated checks - CI must pass before review
- Reviewer assignment - At least one required reviewer
- Review feedback cycle - Address all comments
- Approval and merge - Squash or rebase merge preferred
Integration and Deployment
Continuous Integration
Navius uses GitHub Actions/GitLab CI for continuous integration. Every commit triggers:
- Compilation - Ensuring code builds
- Testing - Running unit and integration tests
- Linting - Checking code quality with Clippy
- Formatting - Verifying rustfmt compliance
Release Process
- Version bumping:
  # Update version in Cargo.toml and other files
  cargo bump patch  # or minor, major
  # Commit version bump
  git add .
  git commit -m "chore: bump version to 1.2.3"
- Create a release tag:
  git tag -a v1.2.3 -m "Release v1.2.3"
  git push origin v1.2.3
- Create a release on GitHub/GitLab with release notes
Advanced Git Techniques
Useful Git Commands
# View branch history with graph
git log --graph --oneline --decorate
# Temporarily stash changes
git stash
git stash pop
# Find which commit introduced a bug
git bisect start
git bisect bad # current commit has the bug
git bisect good <commit-hash> # known good commit
# Show changes between commits
git diff <commit1>..<commit2>
# Amend last commit
git commit --amend
# Interactive rebase to clean history
git rebase -i HEAD~3 # rebase last 3 commits
Git Workflows for Specific Scenarios
Handling Merge Conflicts
- Rebasing approach:
  git fetch origin
  git rebase origin/main
  # Resolve conflicts
  git add .
  git rebase --continue
- Merging approach:
  git fetch origin
  git merge origin/main
  # Resolve conflicts
  git add .
  git commit
Cherry-picking Specific Commits
# Find the commit hash
git log
# Cherry-pick the commit
git cherry-pick <commit-hash>
Creating a Hotfix
# Branch from production tag
git checkout v1.2.3
git checkout -b hotfix/critical-security-fix
# Make changes, commit, and push
git add .
git commit -m "fix: address security vulnerability in auth"
git push -u origin hotfix/critical-security-fix
# After review and approval, merge to main and develop
Troubleshooting
Common Issues and Solutions
"Permission denied" when pushing
- Issue: SSH key not configured
- Solution:
# Generate SSH key
ssh-keygen -t ed25519 -C "[email protected]"
# Add to SSH agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# Add public key to GitHub/GitLab
cat ~/.ssh/id_ed25519.pub
Accidentally committed sensitive data
- Issue: Credentials or sensitive data committed
- Solution:
# Remove file from Git history
git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch path/to/sensitive-file" \
  --prune-empty --tag-name-filter cat -- --all
# Force push
git push origin --force --all
# Update credentials/tokens immediately
Merge conflicts during rebase
- Issue: Complex conflicts during rebase
- Solution:
# Abort rebase if too complex
git rebase --abort
# Try merge instead
git merge origin/main
# Or use a visual merge tool
git mergetool
Getting Help
If you're stuck with Git issues:
- Check the Navius Developer Forum
- Review Git documentation
- Ask in the #dev-help channel on the Navius Discord Server
Related Resources
- Development Workflow
- Code Review Process
- Contributing Guidelines
- IDE Setup Guide
- Official Git Documentation
title: "Testing Guide for Navius Development" description: "Comprehensive guide for writing and running tests in Navius applications" category: "Guides" tags: ["development", "testing", "quality assurance", "unit tests", "integration tests", "e2e tests"] last_updated: "April 7, 2025" version: "1.0"
Testing Guide for Navius Development
This guide provides comprehensive instructions for testing Navius applications. Quality testing ensures reliability, improves maintainability, and accelerates development by catching issues early.
Table of Contents
- Testing Philosophy
- Test Types and Structure
- Writing Effective Tests
- Test Organization
- Test Frameworks and Tools
- Running Tests
- Test Coverage
- Testing Best Practices
- Mocking and Test Doubles
- Continuous Integration
- Debugging Tests
Testing Philosophy
Navius follows these testing principles:
- Test Early, Test Often - Tests should be written alongside code development
- Test Isolation - Tests should be independent and not affect each other
- Test Readability - Tests serve as documentation and should be clear and understandable
- Speed Matters - The test suite should run quickly to enable frequent testing
- Risk-Based Testing - Focus more testing efforts on critical and complex components
Test Types and Structure
Unit Tests
Unit tests verify individual components in isolation. In Navius, unit tests are typically:
- Located in the same file as the code they're testing, in a `tests` module
- Focused on a single function or method
- Fast to execute
- Don't require external resources
Example unit test:
// In src/utils/string_utils.rs
pub fn capitalize(s: &str) -> String {
    if s.is_empty() {
        return String::new();
    }
    let mut chars = s.chars();
    match chars.next() {
        None => String::new(),
        Some(first) => first.to_uppercase().chain(chars).collect(),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_capitalize_empty_string() {
        assert_eq!(capitalize(""), "");
    }

    #[test]
    fn test_capitalize_single_letter() {
        assert_eq!(capitalize("a"), "A");
    }

    #[test]
    fn test_capitalize_word() {
        assert_eq!(capitalize("hello"), "Hello");
    }

    #[test]
    fn test_capitalize_already_capitalized() {
        assert_eq!(capitalize("Hello"), "Hello");
    }
}
Integration Tests
Integration tests verify that different components work together correctly. In Navius:
- Located in the `tests/` directory at the project root
- Test interactions between multiple components
- May use test databases or other isolated resources
- Focus on component interfaces and interactions
Example integration test:
// In tests/auth_integration_test.rs
use navius::{
    auth::{AuthService, User},
    database::{self, Database},
};
use uuid::Uuid;

#[tokio::test]
async fn test_user_authentication_flow() {
    // Setup
    let db = database::get_test_database().await.unwrap();
    let auth_service = AuthService::new(db.clone());

    // Create test user
    let username = format!("test_user_{}", Uuid::new_v4());
    let password = "secureP@ssw0rd";

    // Register
    let user = auth_service.register(&username, password).await.unwrap();
    assert_eq!(user.username, username);

    // Login
    let login_result = auth_service.login(&username, password).await.unwrap();
    assert!(login_result.token.len() > 10);

    // Verify
    let user_id = user.id;
    let verified = auth_service.verify_token(&login_result.token).await.unwrap();
    assert_eq!(verified.user_id, user_id);

    // Cleanup
    db.delete_user(user_id).await.unwrap();
}
End-to-End Tests
E2E tests verify complete user flows through the system. In Navius:
- Located in the `tests/e2e/` directory
- Test complete user flows and scenarios
- Often use browser automation tools like Selenium or Playwright
- Slower but provide high confidence in the system's correctness
Example E2E test (using Playwright for web UI testing):
// In tests/e2e/user_registration.spec.ts
import { test, expect } from '@playwright/test';
test.describe('User Registration Flow', () => {
test('should allow a new user to register and login', async ({ page }) => {
// Generate unique username
const username = `test_user_${Date.now()}`;
const password = 'SecureP@ss123';
// Visit registration page
await page.goto('/register');
// Fill and submit registration form
await page.fill('[data-testid="username-input"]', username);
await page.fill('[data-testid="password-input"]', password);
await page.fill('[data-testid="confirm-password-input"]', password);
await page.click('[data-testid="register-button"]');
// Verify successful registration
await expect(page).toHaveURL('/login');
// Login with new credentials
await page.fill('[data-testid="username-input"]', username);
await page.fill('[data-testid="password-input"]', password);
await page.click('[data-testid="login-button"]');
// Verify successful login
await expect(page).toHaveURL('/dashboard');
await expect(page.locator('[data-testid="user-greeting"]')).toContainText(username);
});
});
API Tests
API tests verify API endpoints. In Navius:
- Located in the `tests/api/` directory
- Test API request/response cycles
- Validate response status, headers, and body
- Can use libraries like reqwest or testing frameworks like Postman
Example API test:
// In tests/api/user_api_test.rs
use navius::setup_test_server;
use reqwest::{Client, StatusCode};
use serde_json::{json, Value};

#[tokio::test]
async fn test_user_creation_api() {
    // Start test server
    let server = setup_test_server().await;
    let client = Client::new();
    let base_url = format!("http://localhost:{}", server.port());

    // Create user request
    let response = client
        .post(&format!("{}/api/users", base_url))
        .json(&json!({
            "username": "api_test_user",
            "password": "P@ssw0rd123",
            "email": "[email protected]"
        }))
        .send()
        .await
        .unwrap();

    // Verify response
    assert_eq!(response.status(), StatusCode::CREATED);
    let user: Value = response.json().await.unwrap();
    assert_eq!(user["username"], "api_test_user");
    assert_eq!(user["email"], "[email protected]");
    assert!(user.get("password").is_none()); // Password should not be returned

    // Verify user was created by fetching it
    let get_response = client
        .get(&format!("{}/api/users/{}", base_url, user["id"]))
        .send()
        .await
        .unwrap();
    assert_eq!(get_response.status(), StatusCode::OK);

    // Cleanup
    let delete_response = client
        .delete(&format!("{}/api/users/{}", base_url, user["id"]))
        .send()
        .await
        .unwrap();
    assert_eq!(delete_response.status(), StatusCode::NO_CONTENT);
}
Writing Effective Tests
Test Structure
Follow the AAA (Arrange-Act-Assert) pattern for clear test structure:
#[test]
fn test_user_validation() {
    // Arrange
    let user_input = UserInput {
        username: "user1",
        email: "invalid-email",
        password: "short",
    };
    let validator = UserValidator::new();

    // Act
    let validation_result = validator.validate(&user_input);

    // Assert
    assert!(!validation_result.is_valid);
    assert_eq!(validation_result.errors.len(), 2);
    assert!(validation_result.errors.contains(&ValidationError::InvalidEmail));
    assert!(validation_result.errors.contains(&ValidationError::PasswordTooShort));
}
Descriptive Test Names
Use descriptive test names that explain what is being tested and the expected outcome:
// Not descriptive
#[test]
fn test_user() { /* ... */ }

// More descriptive
#[test]
fn test_user_with_invalid_email_should_fail_validation() { /* ... */ }
Testing Edge Cases
Include tests for edge cases and boundary conditions:
#[test]
fn test_pagination_with_zero_items() { /* ... */ }

#[test]
fn test_pagination_with_exactly_one_page() { /* ... */ }

#[test]
fn test_pagination_with_partial_last_page() { /* ... */ }

#[test]
fn test_pagination_with_max_page_size() { /* ... */ }
Test Organization
Directory Structure
Navius follows this test organization:
navius/
├── src/
│   ├── module1/
│   │   ├── file1.rs (with unit tests)
│   │   └── file2.rs (with unit tests)
│   └── module2/
│       └── file3.rs (with unit tests)
└── tests/
    ├── integration/
    │   ├── module1_test.rs
    │   └── module2_test.rs
    ├── api/
    │   ├── endpoints_test.rs
    │   └── middleware_test.rs
    ├── e2e/
    │   └── user_flows_test.rs
    └── common/
        └── test_helpers.rs
Test Tagging
Use attributes to categorize and run specific test groups:
#[test]
#[ignore = "slow test, run only in CI"]
fn test_intensive_operation() { /* ... */ }

#[test]
#[cfg(feature = "extended-tests")]
fn test_extended_feature() { /* ... */ }
Test Frameworks and Tools
Core Testing Frameworks
- Rust's built-in test framework - For unit and integration tests
- tokio::test - For async testing
- Criterion - For benchmarking
- Playwright/Selenium - For E2E tests (frontend)
- reqwest - For API testing
Helper Libraries
- pretty_assertions - For improved assertion output
- mock_it - For mocking in Rust
- rstest - For parameterized tests
- test-case - For table-driven tests
- fake - For generating test data
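Libraries like `rstest` and `test-case` generate one test per input row; the same idea can be sketched with plain `std`, here reusing the `capitalize` function from the unit-test example earlier in this guide:

```rust
fn capitalize(s: &str) -> String {
    let mut chars = s.chars();
    match chars.next() {
        None => String::new(),
        Some(first) => first.to_uppercase().chain(chars).collect(),
    }
}

fn main() {
    // One table of (input, expected) rows replaces four near-identical tests.
    let cases = [("", ""), ("a", "A"), ("hello", "Hello"), ("Hello", "Hello")];
    for (input, expected) in cases {
        assert_eq!(capitalize(input), expected, "failed for input {:?}", input);
    }
}
```

In a real suite this loop would sit inside a `#[test]` function, or each row would become its own `#[test_case]`/`#[rstest]` case so failures are reported per row.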
Running Tests
Basic Test Execution
# Run all tests
cargo test
# Run tests in a specific file
cargo test --test auth_integration_test
# Run tests with a specific name pattern
cargo test user_validation
# Run ignored tests
cargo test -- --ignored
# Run a specific test
cargo test test_user_with_invalid_email_should_fail_validation
Test Configuration
Configure test behavior using environment variables or the `.env.test` file:
# .env.test
TEST_DATABASE_URL=postgres://postgres:password@localhost:5432/navius_test
TEST_REDIS_URL=redis://localhost:6379/1
TEST_LOG_LEVEL=debug
Load these in your test setup:
use std::env;

fn setup() {
    // Load test-specific environment variables; a missing file is ignored.
    dotenv::from_filename(".env.test").ok();
    let db_url = env::var("TEST_DATABASE_URL").expect("TEST_DATABASE_URL must be set");
    // Use db_url for the test database connection
}
Test Coverage
Measuring Coverage
Navius uses grcov for test coverage:
# Install grcov
cargo install grcov
# Generate coverage report
CARGO_INCREMENTAL=0 RUSTFLAGS='-Cinstrument-coverage' LLVM_PROFILE_FILE='cargo-test-%p-%m.profraw' cargo test
grcov . --binary-path ./target/debug/ -s . -t html --branch --ignore-not-existing -o ./coverage/
Coverage Targets
- Minimum coverage targets:
- 80% line coverage for business logic
- 70% branch coverage for business logic
- 60% line coverage for infrastructure code
Testing Best Practices
Do's:
- ✅ Write tests before or alongside code (TDD/BDD when possible)
- ✅ Keep tests independent and isolated
- ✅ Use meaningful test data
- ✅ Test failure cases, not just success paths
- ✅ Run tests frequently during development
- ✅ Test public interfaces rather than implementation details
- ✅ Clean up test resources (connections, files, etc.)
Don'ts:
- ❌ Don't skip testing error conditions
- ❌ Don't use random data without controlling the seed
- ❌ Don't write tests that depend on execution order
- ❌ Don't test trivial code (e.g., getters/setters)
- ❌ Don't write overly complex tests
- ❌ Don't include external services in unit tests
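One way to guarantee the cleanup rule above even when a test panics is an RAII guard whose `Drop` implementation tears the resource down. This is a std-only sketch; the `TempTestDir` type is hypothetical:

```rust
use std::fs;
use std::path::PathBuf;

// RAII guard: the scratch directory is removed when the guard goes out
// of scope, including when the test panics mid-way.
struct TempTestDir {
    path: PathBuf,
}

impl TempTestDir {
    fn new(name: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(name);
        fs::create_dir_all(&path)?;
        Ok(Self { path })
    }
}

impl Drop for TempTestDir {
    fn drop(&mut self) {
        // Best-effort teardown; errors during cleanup are ignored.
        let _ = fs::remove_dir_all(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let dir = TempTestDir::new("navius_test_scratch")?;
    fs::write(dir.path.join("fixture.txt"), "test data")?;
    assert!(dir.path.join("fixture.txt").exists());
    Ok(()) // `dir` is dropped here and the directory is removed
}
```

The same pattern works for database connections or test servers: acquire in the constructor, release in `Drop`, and the cleanup can no longer be forgotten.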
Mocking and Test Doubles
Types of Test Doubles
- Stubs - Return predefined responses
- Mocks - Verify expected interactions
- Fakes - Working implementations for testing only
- Spies - Record calls for later verification
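Mocks and stubs are shown below with `mockall`; a spy can also be hand-rolled with nothing but `std`, as in this sketch (the `Mailer` trait and `SpyMailer` type are illustrative, not Navius APIs):

```rust
use std::cell::RefCell;

trait Mailer {
    fn send(&self, to: &str);
}

// A hand-rolled spy: records every call so the test can verify
// interactions after the fact.
struct SpyMailer {
    sent: RefCell<Vec<String>>,
}

impl Mailer for SpyMailer {
    fn send(&self, to: &str) {
        self.sent.borrow_mut().push(to.to_string());
    }
}

fn notify_all(mailer: &dyn Mailer, users: &[&str]) {
    for user in users {
        mailer.send(user);
    }
}

fn main() {
    let spy = SpyMailer { sent: RefCell::new(Vec::new()) };
    notify_all(&spy, &["[email protected]", "[email protected]"]);
    // The spy lets the test assert on what was actually called.
    assert_eq!(spy.sent.borrow().len(), 2);
    assert_eq!(spy.sent.borrow()[0], "[email protected]");
}
```

`RefCell` gives the spy interior mutability behind the trait's `&self` methods, which keeps the production trait signature unchanged.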
Mocking in Rust
Example using the `mockall` crate:
use mockall::{automock, predicate::*};

#[automock]
trait Database {
    fn get_user(&self, id: u64) -> Option<User>;
    fn save_user(&self, user: &User) -> Result<(), DbError>;
}

#[test]
fn test_user_service_with_mock_db() {
    let mut mock_db = MockDatabase::new();

    // Setup expectations
    mock_db.expect_get_user()
        .with(predicate::eq(42))
        .times(1)
        .returning(|_| Some(User { id: 42, name: "Test User".to_string() }));

    // Create service with mock
    let user_service = UserService::new(Box::new(mock_db));

    // Test the service
    let user = user_service.get_user(42).unwrap();
    assert_eq!(user.name, "Test User");
}
Creating Test Fakes
For complex dependencies, create fake implementations:
// A fake in-memory database for testing
struct InMemoryDatabase {
    users: std::sync::Mutex<HashMap<u64, User>>,
}

impl InMemoryDatabase {
    fn new() -> Self {
        Self {
            users: std::sync::Mutex::new(HashMap::new()),
        }
    }
}

impl Database for InMemoryDatabase {
    fn get_user(&self, id: u64) -> Option<User> {
        self.users.lock().unwrap().get(&id).cloned()
    }

    fn save_user(&self, user: &User) -> Result<(), DbError> {
        self.users.lock().unwrap().insert(user.id, user.clone());
        Ok(())
    }
}
Continuous Integration
CI Test Configuration
Navius uses GitHub Actions for CI testing:
# .github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:14
env:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: navius_test
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:6
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Rust
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt, clippy
- name: Cache dependencies
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Run tests
run: cargo test --all-features
env:
TEST_DATABASE_URL: postgres://postgres:postgres@localhost:5432/navius_test
TEST_REDIS_URL: redis://localhost:6379/1
- name: Generate coverage
run: |
cargo install grcov
CARGO_INCREMENTAL=0 RUSTFLAGS='-Cinstrument-coverage' LLVM_PROFILE_FILE='cargo-test-%p-%m.profraw' cargo test
grcov . --binary-path ./target/debug/ -s . -t lcov --branch --ignore-not-existing -o ./lcov.info
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
file: ./lcov.info
fail_ci_if_error: false
Debugging Tests
Tips for Debugging Tests
- Use test-specific logging:
  #[test]
  fn test_complex_operation() {
      let _ = env_logger::builder().is_test(true).try_init();
      debug!("Starting test with parameters: {:?}", test_params);
      // Test code...
  }
- Run single tests with verbose output:
  RUST_LOG=debug cargo test test_name -- --nocapture
- Use the debugger: Configure your IDE to debug tests, set breakpoints, and step through code.
- Add more detailed assertions:
  // Instead of
  assert_eq!(result, expected);
  // Use more descriptive assertions
  assert_eq!(
      result, expected,
      "Result {:?} doesn't match expected {:?} when processing input {:?}",
      result, expected, input
  );
Common Test Failures
- Failing Async Tests: Ensure your runtime is properly set up and test futures are awaited
- Flaky Tests: Look for race conditions or external dependencies
- Timeout Issues: Check for blocking operations in async contexts
- Resource Leaks: Ensure proper cleanup after tests
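The flaky-test case above usually comes down to asserting before background work has finished. A minimal, dependency-free sketch (the names are illustrative, not Navius APIs) of the race and its deterministic fix:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicU32::new(0));
    let c = Arc::clone(&counter);

    // Background work the test depends on.
    let worker = thread::spawn(move || {
        c.fetch_add(1, Ordering::SeqCst);
    });

    // Flaky version: asserting here races the worker and fails
    // intermittently. Deterministic fix: join (or `.await`) the
    // worker before asserting.
    worker.join().unwrap();
    assert_eq!(counter.load(Ordering::SeqCst), 1);
}
```

The same principle applies to async tests: await every spawned future before the final assertion rather than relying on timing.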
Related Resources
- Official Rust Test Documentation
- Navius Testing Templates
- Code Coverage Reports
- Testing Guidelines
- Debugging Guide
title: "Debugging Guide for Navius Development" description: "Comprehensive techniques and tools for debugging Navius applications" category: "Guides" tags: ["development", "debugging", "troubleshooting", "logging", "performance", "rust"] last_updated: "April 7, 2025" version: "1.0"
Debugging Guide for Navius Development
This guide provides comprehensive instructions and best practices for debugging Navius applications. Effective debugging is essential for maintaining code quality and resolving issues efficiently.
Table of Contents
- Debugging Philosophy
- Common Debugging Scenarios
- Debugging Tools
- Logging and Tracing
- Rust-Specific Debugging Techniques
- Database Debugging
- API Debugging
- Performance Debugging
- Advanced Debugging Scenarios
- Debugging in Production
Debugging Philosophy
Effective debugging in Navius development follows these principles:
- Reproduce First - Create a reliable reproduction case before attempting to fix an issue
- Isolate the Problem - Narrow down the scope of the issue
- Data-Driven Approach - Use facts, logs, and evidence rather than guesswork
- Systematic Investigation - Follow a methodical process rather than random changes
- Root Cause Analysis - Fix the underlying cause, not just the symptoms
Common Debugging Scenarios
Application Crashes
When your Navius application crashes:
- Check the Stack Trace - Identify where the crash occurred
- Examine Error Messages - Parse logs for error details
- Reproduce the Crash - Create a minimal test case
- Check for Resource Issues - Verify memory usage and system resources
- Review Recent Changes - Consider what code changed recently
Example stack trace analysis:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: DatabaseError { kind: ConnectionError, cause: Some("connection refused") }', src/services/user_service.rs:52:10
stack backtrace:
0: std::panicking::begin_panic_handler
1: std::panicking::panic_handler
2: core::panicking::panic_fmt
3: core::result::unwrap_failed
4: navius::services::user_service::UserService::find_by_id
5: navius::handlers::user_handlers::get_user
6: navius::main
This indicates:
- The crash is in `user_service.rs` line 52
- It's unwrapping a database connection error
- The connection is being refused
Runtime Errors
For non-crash errors (incorrect behavior):
- Identify the Expected vs. Actual Behavior
- Use Logging to Track Flow
- Create Unit Tests to reproduce and verify the issue
- Use Debugger Breakpoints at key decision points
Build Errors
For build failures:
- Read Compiler Messages Carefully - Rust provides detailed error messages
- Check Dependencies - Verify Cargo.toml and dependency versions
- Use Tools - Clippy can identify additional issues
- Clean and Rebuild - Run `cargo clean && cargo build`
Debugging Tests
For test failures:
- Run Single Test - Focus on one test with `cargo test test_name`
- Use `--nocapture` - See output with `cargo test -- --nocapture`
- Add Debugging Prints - Temporarily add print statements
- Use Test-Specific Logs - Enable debug logging during tests
Debugging Tools
IDE Debuggers
Visual Studio Code
- Setup configuration in `.vscode/launch.json`:
{
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug Navius Server",
"cargo": {
"args": ["build", "--bin=navius"],
"filter": {
"name": "navius",
"kind": "bin"
}
},
"args": [],
"cwd": "${workspaceFolder}",
"env": {
"RUST_LOG": "debug",
"CONFIG_DIR": "./config",
"RUN_ENV": "development"
}
},
{
"type": "lldb",
"request": "launch",
"name": "Debug Unit Tests",
"cargo": {
"args": ["test", "--no-run"],
"filter": {
"name": "navius",
"kind": "lib"
}
},
"args": [],
"cwd": "${workspaceFolder}"
}
]
}
- Set breakpoints by clicking in the gutter
- Start debugging by pressing F5 or using the Debug menu
- Use the Debug panel to:
- Step through code (F10)
- Step into functions (F11)
- View variables and their values
- Evaluate expressions in the Debug Console
JetBrains IDEs (CLion/IntelliJ with Rust plugin)
-
Create Run/Debug configurations for:
- Main application
- Specific test files
- All tests
-
Debugging features to use:
- Expression evaluation
- Memory view
- Smart step-into
- Conditional breakpoints
Command Line Debugging
For environments without IDE support, use:
-
LLDB/GDB:
# Build with debug symbols
cargo build

# Start debugger
lldb ./target/debug/navius

# Set breakpoints
breakpoint set --file user_service.rs --line 52

# Run program
run

# After hitting breakpoint
frame variable    # Show variables in current frame
thread backtrace  # Show current stack
expression user.id  # Evaluate expression
-
cargo-lldb:
cargo install cargo-lldb
cargo lldb --bin navius
Specialized Debugging Tools
-
Memory Analysis:
- Valgrind for memory leaks: `valgrind --leak-check=full ./target/debug/navius`
- ASAN (Address Sanitizer): Build with `-Z sanitizer=address`
-
Thread Analysis:
- Inspect thread states: `ps -T -p <PID>`
- Thread contention: `perf record -g -p <PID>`
-
Network Debugging:
- Wireshark for packet analysis
- `tcpdump` for network traffic capture
- `curl` for API request testing
Logging and Tracing
Structured Logging
Navius uses the `tracing` crate for structured logging:
use tracing::{debug, error, info, instrument, warn};

#[instrument(skip(password))]
pub async fn authenticate_user(username: &str, password: &str) -> Result<User, AuthError> {
    debug!("Attempting to authenticate user: {}", username);

    match user_repository.find_by_username(username).await {
        Ok(user) => {
            if verify_password(password, &user.password_hash) {
                info!("User authenticated successfully: {}", username);
                Ok(user)
            } else {
                warn!("Failed authentication attempt for user: {}", username);
                Err(AuthError::InvalidCredentials)
            }
        }
        Err(e) => {
            error!(error = ?e, "Database error during authentication");
            Err(AuthError::DatabaseError(e))
        }
    }
}
Log Levels
Use appropriate log levels:
- ERROR: Application errors requiring immediate attention
- WARN: Unexpected situations that don't cause application failure
- INFO: Important events for operational insights
- DEBUG: Detailed information useful for debugging
- TRACE: Very detailed information, typically for pinpointing issues
Configuring Logging
Set via environment variables:
# Set log level
export RUST_LOG=navius=debug,warp=info
# Log to file
export RUST_LOG_STYLE=always
export RUST_LOG_FILE=/var/log/navius.log
Or in code:
use tracing_subscriber::{self, fmt::format::FmtSpan, EnvFilter};

fn setup_logging() {
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("navius=info,warp=warn"));

    tracing_subscriber::fmt()
        .with_env_filter(filter)
        .with_span_events(FmtSpan::CLOSE)
        .with_file(true)
        .with_line_number(true)
        .init();
}
Log Analysis
For analyzing logs:
-
Search with grep/ripgrep:
rg "error|exception" navius.log
-
Context with before/after lines:
rg -A 5 -B 2 "DatabaseError" navius.log
-
Filter by time period:
rg "2023-04-07T14:[0-5]" navius.log
-
Count occurrences:
rg -c "AUTH_FAILED" navius.log
Rust-Specific Debugging Techniques
Debug Prints
Use the `dbg!` macro for quick debugging:
// Instead of
let result = complex_calculation(x, y);
println!("Result: {:?}", result);

// Use dbg! to show file/line and expression
let result = dbg!(complex_calculation(x, y));
Unwrap Alternatives
Replace `unwrap()` and `expect()` with better error handling:
// Instead of
let user = db.find_user(id).unwrap();

// Use more descriptive handling
let user = db.find_user(id)
    .map_err(|e| {
        error!("Failed to retrieve user {}: {:?}", id, e);
        e
    })?;
Narrowing Down Rust Compiler Errors
For complex compile errors:
- Binary Search - Comment out sections of code until error disappears
- Type Annotations - Add explicit type annotations to clarify issues
- Minimal Example - Create a minimal failing example
- Check Versions - Verify dependency versions for compatibility
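For the type-annotation step, here is a tiny example of the kind of inference ambiguity an explicit annotation resolves:

```rust
fn main() {
    // `"42".parse()` alone is ambiguous: the compiler cannot infer the
    // target type, and the error can cascade into later uses of `n`.
    // An explicit annotation pins the type and localizes any remaining
    // errors to the lines that actually disagree with it.
    let n: u32 = "42".parse().expect("not a number");
    assert_eq!(n, 42);
}
```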
Debugging Async Code
Async code can be challenging to debug:
-
Instrument async functions:
#[instrument(skip(request))]
async fn handle_request(request: Request) -> Response {
    // ...
}
-
Use `with_span_events` to trace async execution:
tracing_subscriber::fmt()
    .with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
    .init();
-
Inspect tasks:
tokio::spawn(async move {
    let span = tracing::info_span!("worker_task", id = %task_id);
    let _guard = span.enter();
    // task code...
});
Memory Analysis
For memory issues:
-
Check for leaks with the `Drop` trait:
impl Drop for MyResource {
    fn drop(&mut self) {
        debug!("MyResource being dropped: {:?}", self.id);
    }
}
-
Use weak references where appropriate:
use std::rc::{Rc, Weak};
use std::cell::RefCell;

struct Parent {
    children: Vec<Rc<RefCell<Child>>>,
}

struct Child {
    parent: Weak<RefCell<Parent>>,
}
Database Debugging
Query Analysis
For slow or problematic database queries:
-
Query Logging - Enable PostgreSQL query logging:
# In postgresql.conf
log_min_duration_statement = 100  # Log queries taking > 100ms
-
Query Explain - Use EXPLAIN ANALYZE:
EXPLAIN ANALYZE SELECT * FROM users WHERE email LIKE '%example.com';
-
Check Indexes - Verify appropriate indexes exist:
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'users';
Connection Issues
For database connection problems:
-
Connection Pool Diagnostics:
// Log connection pool status
let status = pool.status().await;
info!(
    "DB Pool: active={}, idle={}, size={}",
    status.active, status.idle, status.size
);
-
Check Connection Parameters:
let conn_params = PgConnectOptions::new()
    .host(&config.db_host)
    .port(config.db_port)
    .username(&config.db_user)
    .password(&config.db_password)
    .database(&config.db_name);

debug!("Connection parameters: {:?}", conn_params);
-
Manual Connection Test:
PGPASSWORD=your_password psql -h hostname -U username -d database -c "\conninfo"
API Debugging
Request/Response Logging
For API debugging:
-
Add request/response middleware:
async fn log_request_response(
    req: Request,
    next: Next,
) -> Result<impl IntoResponse, (StatusCode, String)> {
    let path = req.uri().path().to_string();
    let method = req.method().clone();
    let req_id = Uuid::new_v4();
    let start = std::time::Instant::now();

    info!(request_id = %req_id, %method, %path, "Request received");

    let response = next.run(req).await;

    let status = response.status();
    let duration = start.elapsed();

    info!(
        request_id = %req_id,
        %method,
        %path,
        status = %status.as_u16(),
        duration_ms = %duration.as_millis(),
        "Response sent"
    );

    Ok(response)
}
-
API Testing Tools:
- Use Postman or Insomnia for manual API testing
- Create collections for common request scenarios
- Save environments for different setups (dev, test, prod)
-
Curl for quick tests:
curl -v -X POST http://localhost:3000/api/users \
  -H "Content-Type: application/json" \
  -d '{"username":"test", "password":"test123"}'
Performance Debugging
Identifying Performance Issues
-
Profiling with `flamegraph`:
cargo install flamegraph
CARGO_PROFILE_RELEASE_DEBUG=true cargo flamegraph --bin navius
-
Benchmarking with Criterion:
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_user_service(c: &mut Criterion) {
    let service = UserService::new(/* dependencies */);

    c.bench_function("find_user_by_id", |b| {
        b.iter(|| service.find_by_id(black_box(1)))
    });
}

criterion_group!(benches, benchmark_user_service);
criterion_main!(benches);
-
Request Timing:
async fn handle_request() -> Response {
    let timer = std::time::Instant::now();

    // Handle request...

    let duration = timer.elapsed();
    info!("Request processed in {}ms", duration.as_millis());

    // Return response...
}
Common Performance Issues
-
N+1 Query Problem:
- Symptom: Multiple sequential database queries
- Solution: Use joins or batch fetching
-
Missing Indexes:
- Symptom: Slow queries with table scans
- Solution: Add appropriate indexes
-
Blocking Operations in Async Context:
- Symptom: High latency, thread pool exhaustion
- Solution: Move blocking operations to blocking task pool
let result = tokio::task::spawn_blocking(move || {
    // CPU-intensive or blocking operation
    expensive_calculation()
}).await?;
-
Memory Leaks:
- Symptom: Growing memory usage over time
- Solution: Check for unclosed resources, circular references
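To make the N+1 fix above concrete, here is a dependency-free sketch. The types are illustrative, not Navius APIs, and the `HashMap` stands in for a single batched query such as `SELECT * FROM authors WHERE id = ANY($1)`: collect the foreign keys, fetch them once, then join in memory.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-in for a database row.
#[derive(Debug, Clone, PartialEq)]
struct Author {
    id: u64,
    name: String,
}

// N+1 shape (avoid): imagine each `get` here is a database round-trip,
// one per post being rendered.
fn fetch_authors_one_by_one(ids: &[u64], db: &HashMap<u64, Author>) -> Vec<Option<Author>> {
    ids.iter().map(|id| db.get(id).cloned()).collect()
}

// Batched shape: dedupe the ids, issue one "query", join in memory.
fn fetch_authors_batched(ids: &[u64], db: &HashMap<u64, Author>) -> Vec<Option<Author>> {
    let unique: HashSet<u64> = ids.iter().copied().collect();
    // One round-trip for all unique ids...
    let fetched: HashMap<u64, Author> = unique
        .into_iter()
        .filter_map(|id| db.get(&id).map(|a| (id, a.clone())))
        .collect();
    // ...then an in-memory join back onto the original order.
    ids.iter().map(|id| fetched.get(id).cloned()).collect()
}

fn main() {
    let mut db = HashMap::new();
    db.insert(1, Author { id: 1, name: "alice".into() });
    db.insert(2, Author { id: 2, name: "bob".into() });

    // Same results, but the batched version would issue one query
    // instead of three.
    let ids = [1, 2, 1];
    assert_eq!(
        fetch_authors_one_by_one(&ids, &db),
        fetch_authors_batched(&ids, &db)
    );
}
```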
Advanced Debugging Scenarios
Race Conditions
For debugging concurrency issues:
-
Add Tracing for Async Operations:
#[instrument(skip(data))]
async fn process_data(id: u64, data: Vec<u8>) {
    info!("Starting processing");
    // Processing code...
    info!("Finished processing");
}
-
Use Atomic Operations:
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn increment_counter() {
    let prev = COUNTER.fetch_add(1, Ordering::SeqCst);
    debug!("Counter incremented from {} to {}", prev, prev + 1);
}
-
Debugging Deadlocks:
- Add timeout to lock acquisitions
- Log lock acquisition/release
- Use deadlock detection in development
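The lock-timeout idea can be sketched with nothing but the standard library (the helper name is ours, not a Navius API): poll `try_lock` against a deadline, so a suspected deadlock surfaces as a `None` you can log instead of a silent hang.

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

// Try to acquire the lock, giving up after `timeout` instead of
// blocking forever. Returns None on timeout so the caller can log
// which lock stalled and from where.
fn lock_with_timeout<'a, T>(
    m: &'a Mutex<T>,
    timeout: Duration,
) -> Option<std::sync::MutexGuard<'a, T>> {
    let deadline = Instant::now() + timeout;
    loop {
        if let Ok(guard) = m.try_lock() {
            return Some(guard);
        }
        if Instant::now() >= deadline {
            // In real code: log the lock name and acquisition site here.
            return None;
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}

fn main() {
    let m = Mutex::new(5);

    // Uncontended: acquisition succeeds immediately.
    assert!(lock_with_timeout(&m, Duration::from_millis(100)).is_some());

    // Contended: holding the guard makes a second attempt time out
    // instead of deadlocking.
    let _held = m.lock().unwrap();
    assert!(lock_with_timeout(&m, Duration::from_millis(50)).is_none());
}
```

For async code the same shape is usually expressed with a timeout wrapper around the lock future rather than a polling loop.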
Memory Corruption
For possible memory corruption:
-
Use Address Sanitizer:
RUSTFLAGS="-Z sanitizer=address" cargo test
-
Check Unsafe Code:
- Review all `unsafe` blocks
- Verify pointer safety
- Check lifetime correctness
-
Foreign Function Interface Issues:
- Verify signature matches
- Check data marshaling
- Ensure proper resource cleanup
Debugging in Production
Safe Production Debugging
-
Structured Logging:
- Use context-rich structured logs
- Include correlation IDs for request tracing
- Log adequate information without sensitive data
-
Metrics and Monitoring:
- Track key performance indicators
- Set up alerts for anomalies
- Use distributed tracing for complex systems
-
Feature Flags:
- Enable additional logging in production for specific issues
if feature_flags.is_enabled("enhanced_auth_logging") {
    debug!("Enhanced auth logging: {:?}", auth_details);
}
Post-Mortem Analysis
For analyzing production issues after they occur:
-
Log Aggregation:
- Collect logs centrally
- Use tools like ELK Stack or Grafana Loki
- Create dashboards for common issues
-
Error Tracking:
- Integrate with error tracking services
- Group similar errors
- Track error rates and trends
-
Core Dumps:
- Enable core dumps in production
- Secure sensitive information
- Analyze with `rust-gdb`
Related Resources
- Rust Debugging with LLDB/GDB
- Effective Error Handling in Rust
- Navius Logging Setup
- Testing Guide
- IDE Setup for Debugging
- PostgreSQL Performance Tuning
title: Feature Guides description: "Comprehensive guides for implementing specific features in Navius applications, including authentication, database integration, API integration, and more" category: guides tags:
- features
- authentication
- api
- database
- integration
- security
- performance
- caching
- customization related:
- ../README.md
- ../../reference/api/README.md
- ../../reference/architecture/principles.md
- ../../reference/patterns/caching-patterns.md last_updated: March 27, 2025 version: 1.1
Feature Guides
This section contains detailed guides for implementing specific features in your Navius applications. Each guide provides step-by-step instructions, best practices, and examples for adding functionality to your projects.
Getting Started
For most applications, we recommend implementing features in this order:
- Database Access - Set up your data layer with PostgreSQL
- Authentication - Implement secure user authentication
- Redis Caching - Add basic caching for performance optimization
- Advanced Caching Strategies - Implement two-tier caching
- API Integration - Connect with external services
- Server Customization - Optimize your deployment with feature selection
Available Guides
Authentication and Security
- Authentication Guide - Implement secure authentication using Microsoft Entra and session management
- Security Best Practices - Essential security measures for Navius applications
Data and Storage
- PostgreSQL Integration - Database integration with PostgreSQL and AWS RDS
- Redis Caching - Implement basic caching with Redis
- Advanced Caching Strategies - Implement two-tier caching with memory and Redis fallback
API and Integration
- API Integration - Connect and integrate with external APIs
- WebSocket Support - Implement real-time communication
- API Design Best Practices - Guidelines for designing robust APIs
Server Customization
- Server Customization CLI - Use the CLI tool to create optimized server builds
- Feature Selection Best Practices - Practical examples for server customization
Implementation Guidelines
When implementing features:
- Security First: Always follow security best practices outlined in the authentication and security guides
- Performance: Consider caching strategies and database optimization
- Testing: Write comprehensive tests for new features
- Documentation: Update relevant documentation when adding features
- Optimization: Use the Server Customization System to create lean, optimized builds
Prerequisites for All Features
- Basic understanding of Rust and async programming
- Navius development environment set up
- Access to necessary external services (databases, APIs, etc.)
- Understanding of Architecture Principles
Related Resources
- API Reference - Technical API documentation
- Architecture Principles - Core architectural concepts
- Configuration Guide - Environment and configuration setup
- Cache Configuration - Configuring the caching system
- Feature Configuration - Configuring the Server Customization System
- Caching Patterns - Technical reference for caching strategies
- Deployment Guide - Production deployment instructions
- Two-Tier Cache Example - Code examples for implementing two-tier caching
- Server Customization Example - Code examples for server customization
Need Help?
If you encounter issues while implementing features:
- Check the troubleshooting section in each guide
- Review the Common Issues documentation
- Join our Discord Community for real-time help
- Open an issue on our GitHub repository
Caching
- Basic Caching Guide - Introduction to caching with Redis and in-memory options
- Advanced Caching Strategies - Implementing the Two-Tier Cache and advanced patterns
Server Customization
- Server Customization CLI - Using the feature selection CLI to optimize server deployments
- Feature System Overview - Understanding the Server Customization System
title: "Navius Authentication Guide" description: "A comprehensive guide to implementing secure authentication in Navius applications, including Microsoft Entra integration, session management with Redis, and security best practices" category: guides tags:
- authentication
- security
- microsoft-entra
- redis
- session-management
- oauth2
- jwt related:
- ../reference/api/authentication-api.md
- ../guides/features/api-integration.md
- ../reference/configuration/environment-variables.md
- ../guides/deployment/security-checklist.md last_updated: March 27, 2025 version: 1.0
Navius Authentication Guide
This guide covers the authentication options available in Navius and how to implement them in your application.
Overview
Navius provides several authentication methods out of the box:
- JWT-based authentication
- OAuth2 integration
- API key authentication
- Microsoft Entra (formerly Azure AD) integration
- Custom authentication schemes
Each method can be configured and combined to suit your application's needs.
JWT Authentication
JWT (JSON Web Token) authentication is the default method in Navius.
Configuration
Configure JWT authentication in your `config.yaml` file:
auth:
jwt:
enabled: true
secret_key: "${JWT_SECRET}"
algorithm: "HS256"
token_expiration_minutes: 60
refresh_token_expiration_days: 7
issuer: "naviusframework.dev"
Implementation
- Login endpoint:
#[post("/login")]
pub async fn login(
    State(state): State<AppState>,
    Json(credentials): Json<LoginCredentials>,
) -> Result<Json<AuthResponse>, AppError> {
    // Validate credentials
    let user = state.user_service.authenticate(
        &credentials.username,
        &credentials.password,
    ).await?;

    // Generate JWT token
    let token = state.auth_service.generate_token(&user)?;
    let refresh_token = state.auth_service.generate_refresh_token(&user)?;

    Ok(Json(AuthResponse {
        access_token: token,
        refresh_token,
        token_type: "Bearer".to_string(),
        expires_in: 3600,
    }))
}
- Protect routes with middleware:
// In your router setup
let protected_routes = Router::new()
    .route("/users", get(list_users))
    .route("/users/:id", get(get_user_by_id))
    .layer(JwtAuthLayer::new(
        state.config.auth.jwt.secret_key.clone(),
    ));
- Access the authenticated user:
#[get("/profile")]
pub async fn get_profile(
    auth_user: AuthUser,
    State(state): State<AppState>,
) -> Result<Json<User>, AppError> {
    let user = state.user_service.get_by_id(auth_user.id).await?;
    Ok(Json(user))
}
OAuth2 Authentication
Navius supports OAuth2 integration with various providers.
Configuration
auth:
oauth2:
enabled: true
providers:
google:
enabled: true
client_id: "${GOOGLE_CLIENT_ID}"
client_secret: "${GOOGLE_CLIENT_SECRET}"
redirect_uri: "https://your-app.com/auth/google/callback"
scopes:
- "email"
- "profile"
github:
enabled: true
client_id: "${GITHUB_CLIENT_ID}"
client_secret: "${GITHUB_CLIENT_SECRET}"
redirect_uri: "https://your-app.com/auth/github/callback"
scopes:
- "user:email"
Implementation
- Add OAuth2 routes:
// In your router setup
let auth_routes = Router::new()
    .route("/auth/google/login", get(google_login))
    .route("/auth/google/callback", get(google_callback))
    .route("/auth/github/login", get(github_login))
    .route("/auth/github/callback", get(github_callback));
- Create provider-specific handlers:
#![allow(unused)] fn main() { #[get("/auth/google/login")] pub async fn google_login( State(state): State<AppState>, ) -> impl IntoResponse { let oauth_client = state.oauth_service.get_provider("google"); let (auth_url, csrf_token) = oauth_client.authorize_url(); // Store CSRF token in cookie let cookie = Cookie::build("oauth_csrf", csrf_token.secret().clone()) .path("/") .max_age(time::Duration::minutes(10)) .http_only(true) .secure(true) .finish(); ( StatusCode::FOUND, [(header::SET_COOKIE, cookie.to_string())], [(header::LOCATION, auth_url.to_string())], ) } #[get("/auth/google/callback")] pub async fn google_callback( Query(params): Query<OAuthCallbackParams>, cookies: Cookies, State(state): State<AppState>, ) -> Result<impl IntoResponse, AppError> { // Validate CSRF token let csrf_cookie = cookies.get("oauth_csrf") .ok_or(AppError::AuthenticationError("Missing CSRF token".into()))?; let oauth_client = state.oauth_service.get_provider("google"); let token = oauth_client.exchange_code( params.code, csrf_cookie.value(), ).await?; // Get user info let user_info = oauth_client.get_user_info(&token).await?; // Find or create user let user = state.user_service.find_or_create_from_oauth( "google", &user_info.id, &user_info.email, &user_info.name, ).await?; // Generate JWT token let jwt_token = state.auth_service.generate_token(&user)?; // Redirect to frontend with token Ok(( StatusCode::FOUND, [(header::LOCATION, format!("/auth/success?token={}", jwt_token))], )) } }
API Key Authentication
For service-to-service or programmatic API access, Navius provides API key authentication.
Configuration
auth:
api_key:
enabled: true
header_name: "X-API-Key"
query_param_name: "api_key"
Implementation
- Create API keys:
#[post("/api-keys")]
pub async fn create_api_key(
    auth_user: AuthUser,
    State(state): State<AppState>,
    Json(payload): Json<CreateApiKeyRequest>,
) -> Result<Json<ApiKey>, AppError> {
    // Ensure user has permission
    if !auth_user.has_permission("api_keys:create") {
        return Err(AppError::PermissionDenied);
    }

    // Create API key
    let api_key = state.api_key_service.create(
        auth_user.id,
        &payload.name,
        payload.expiration_days,
    ).await?;

    Ok(Json(api_key))
}
- Apply API key middleware:
// In your router setup
let api_routes = Router::new()
    .route("/api/v1/data", get(get_data))
    .layer(ApiKeyLayer::new(state.api_key_service.clone()));
Microsoft Entra (Azure AD) Integration
Navius provides specialized support for Microsoft Entra ID integration.
Configuration
auth:
microsoft_entra:
enabled: true
tenant_id: "${AZURE_TENANT_ID}"
client_id: "${AZURE_CLIENT_ID}"
client_secret: "${AZURE_CLIENT_SECRET}"
redirect_uri: "https://your-app.com/auth/microsoft/callback"
scopes:
- "openid"
- "profile"
- "email"
graph_api:
enabled: true
scopes:
- "User.Read"
Implementation
- Microsoft login routes:
#![allow(unused)] fn main() { #[get("/auth/microsoft/login")] pub async fn microsoft_login( State(state): State<AppState>, ) -> impl IntoResponse { let auth_url = state.microsoft_auth_service.get_authorization_url(); ( StatusCode::FOUND, [(header::LOCATION, auth_url.to_string())], ) } #[get("/auth/microsoft/callback")] pub async fn microsoft_callback( Query(params): Query<MicrosoftAuthCallbackParams>, State(state): State<AppState>, ) -> Result<impl IntoResponse, AppError> { // Exchange authorization code for token let token = state.microsoft_auth_service .exchange_code_for_token(¶ms.code) .await?; // Get user info from Microsoft Graph API let user_info = state.microsoft_auth_service .get_user_info(&token) .await?; // Find or create user let user = state.user_service .find_or_create_from_microsoft(&user_info) .await?; // Generate JWT token let jwt_token = state.auth_service.generate_token(&user)?; // Redirect to frontend with token Ok(( StatusCode::FOUND, [(header::LOCATION, format!("/auth/success?token={}", jwt_token))], )) } }
Custom Authentication Schemes
For specialized authentication needs, Navius allows implementing custom authentication schemes.
Implementation
- Create a custom extractor:
pub struct CustomAuthUser {
    pub id: Uuid,
    pub username: String,
    pub roles: Vec<String>,
}

#[async_trait]
impl FromRequestParts<AppState> for CustomAuthUser {
    type Rejection = AppError;

    async fn from_request_parts(
        parts: &mut Parts,
        state: &AppState,
    ) -> Result<Self, Self::Rejection> {
        // Custom authentication logic
        let auth_header = parts
            .headers
            .get(header::AUTHORIZATION)
            .ok_or(AppError::Unauthorized("Missing authentication".into()))?;

        // Parse and validate the header
        let header_value = auth_header.to_str()?;

        // Your custom validation logic
        if !header_value.starts_with("Custom ") {
            return Err(AppError::Unauthorized("Invalid auth scheme".into()));
        }

        let token = header_value[7..].to_string();

        // Validate token and get user
        let user = state.custom_auth_service.validate_token(&token).await?;

        Ok(CustomAuthUser {
            id: user.id,
            username: user.username,
            roles: user.roles,
        })
    }
}
- Use the custom extractor in handlers:
#[get("/custom-auth-resource")]
pub async fn get_protected_resource(
    auth: CustomAuthUser,
) -> Result<Json<Resource>, AppError> {
    // Use auth.id, auth.username, auth.roles
    // ...
    Ok(Json(resource))
}
Role-Based Access Control
Navius provides a built-in RBAC system that integrates with all authentication methods.
Configuration
auth:
rbac:
enabled: true
default_role: "user"
roles:
admin:
permissions:
- "users:read"
- "users:write"
- "settings:read"
- "settings:write"
user:
permissions:
- "users:read:self"
- "settings:read:self"
- "settings:write:self"
Implementation
- Check permissions in handlers:
#[get("/admin/settings")]
pub async fn admin_settings(
    auth_user: AuthUser,
) -> Result<Json<Settings>, AppError> {
    // Check if user has the required permission
    if !auth_user.has_permission("settings:read") {
        return Err(AppError::PermissionDenied);
    }

    // Proceed with handler logic
    // ...
    Ok(Json(settings))
}
- Or use permission middleware:
// In your router setup
let admin_routes = Router::new()
    .route("/admin/users", get(list_users))
    .route("/admin/settings", get(admin_settings))
    .layer(RequirePermissionLayer::new("admin:access"));
Multi-Factor Authentication
Navius supports multi-factor authentication (MFA) via TOTP (Time-based One-Time Password).
Configuration
auth:
mfa:
enabled: true
totp:
enabled: true
issuer: "Navius App"
digits: 6
period: 30
Implementation
- Enable MFA for a user:
#![allow(unused)] fn main() { #[post("/mfa/enable")] pub async fn enable_mfa( auth_user: AuthUser, State(state): State<AppState>, ) -> Result<Json<MfaSetupResponse>, AppError> { // Generate TOTP secret let (secret, qr_code) = state.mfa_service.generate_totp_secret( &auth_user.username, )?; // Store secret temporarily (not yet activated) state.mfa_service.store_pending_secret(auth_user.id, &secret).await?; Ok(Json(MfaSetupResponse { secret, qr_code, })) } #[post("/mfa/verify")] pub async fn verify_mfa( auth_user: AuthUser, State(state): State<AppState>, Json(payload): Json<VerifyMfaRequest>, ) -> Result<Json<MfaVerifyResponse>, AppError> { // Verify TOTP code and activate MFA state.mfa_service .verify_and_activate(auth_user.id, &payload.code) .await?; Ok(Json(MfaVerifyResponse { enabled: true, })) } }
- Login with MFA:
#![allow(unused)] fn main() { #[post("/login")] pub async fn login( State(state): State<AppState>, Json(credentials): Json<LoginCredentials>, ) -> Result<Json<AuthResponse>, AppError> { // Validate credentials let user = state.user_service.authenticate( &credentials.username, &credentials.password, ).await?; // Check if MFA is required if user.mfa_enabled { // Return challenge response return Ok(Json(AuthResponse { requires_mfa: true, mfa_token: state.auth_service.generate_mfa_token(user.id)?, ..Default::default() })); } // Generate JWT token let token = state.auth_service.generate_token(&user)?; let refresh_token = state.auth_service.generate_refresh_token(&user)?; Ok(Json(AuthResponse { access_token: token, refresh_token, token_type: "Bearer".to_string(), expires_in: 3600, requires_mfa: false, })) } #[post("/login/mfa")] pub async fn login_mfa( State(state): State<AppState>, Json(payload): Json<MfaLoginRequest>, ) -> Result<Json<AuthResponse>, AppError> { // Validate MFA token to get user ID let user_id = state.auth_service.validate_mfa_token(&payload.mfa_token)?; // Verify TOTP code let valid = state.mfa_service.verify_code(user_id, &payload.code).await?; if !valid { return Err(AppError::InvalidMfaCode); } // Get user let user = state.user_service.get_by_id(user_id).await?; // Generate JWT token let token = state.auth_service.generate_token(&user)?; let refresh_token = state.auth_service.generate_refresh_token(&user)?; Ok(Json(AuthResponse { access_token: token, refresh_token, token_type: "Bearer".to_string(), expires_in: 3600, requires_mfa: false, })) } }
Session-Based Authentication
For traditional web applications, Navius also supports session-based authentication.
Configuration
auth:
session:
enabled: true
cookie_name: "navius_session"
cookie_secure: true
cookie_http_only: true
cookie_same_site: "Lax"
expiry_hours: 24
redis:
enabled: true
url: "${REDIS_URL}"
Implementation
- Setup session middleware:
// In your main.rs
let session_store = RedisSessionStore::new(
    &config.auth.session.redis.url
).await?;

let app = Router::new()
    // ... routes
    .layer(
        SessionLayer::new(
            session_store,
            &config.auth.session.secret.as_bytes(),
        )
        .with_secure(config.auth.session.cookie_secure)
        .with_http_only(config.auth.session.cookie_http_only)
        .with_same_site(config.auth.session.cookie_same_site)
        .with_expiry(time::Duration::hours(
            config.auth.session.expiry_hours
        ))
    );
- Session-based login:
```rust
#[post("/login")]
pub async fn login(
    mut session: Session,
    State(state): State<AppState>,
    Form(credentials): Form<LoginCredentials>,
) -> Result<impl IntoResponse, AppError> {
    // Validate credentials
    let user = state.user_service.authenticate(
        &credentials.username,
        &credentials.password,
    ).await?;

    // Store user in session
    session.insert("user_id", user.id)?;
    session.insert("username", user.username.clone())?;

    // Redirect to dashboard
    Ok(Redirect::to("/dashboard"))
}
```
- Access session data:
```rust
#[get("/dashboard")]
pub async fn dashboard(
    session: Session,
    State(state): State<AppState>,
) -> Result<impl IntoResponse, AppError> {
    // Get user from session
    let user_id: Uuid = session.get("user_id")
        .ok_or(AppError::Unauthorized("Not logged in".into()))?;

    let user = state.user_service.get_by_id(user_id).await?;

    Ok(HtmlTemplate::new("dashboard", json!({
        "user": user,
    })))
}
```
Testing Authentication
Navius provides utilities for testing authenticated endpoints.
```rust
#[tokio::test]
async fn test_protected_endpoint() {
    // Create test app
    let app = TestApp::new().await;

    // Create a test user
    let user = app.create_test_user("[email protected]", "password123").await;

    // Test with authentication
    let response = app
        .get("/api/profile")
        .with_auth_user(&user) // Helper method to add auth headers
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::OK);

    // Test without authentication
    let response = app
        .get("/api/profile")
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
}
```
Conclusion
Navius provides a comprehensive and flexible authentication system that can be adapted to various application needs. By combining different authentication methods and access control mechanisms, you can create a secure application with the right balance of security and user experience.
For more advanced use cases, refer to the following resources:
Related Documents
- Installation Guide - How to install the application
- Development Workflow - Development best practices
title: "Navius API Integration Guide" description: "A comprehensive guide for integrating external APIs with Navius applications, including best practices, caching strategies, and testing approaches" category: guides tags:
- api
- caching
- integration
- performance
- testing related:
- ../reference/api/README.md
- ../guides/features/authentication.md
- ../guides/development/testing.md
- ../guides/features/caching.md last_updated: March 27, 2025 version: 1.0
Navius API Integration Guide
This guide explains how to integrate external APIs into your Navius application using the built-in API resource abstraction.
Overview
Navius makes it easy to integrate with external APIs by providing:
- Automatic client generation from OpenAPI schemas
- Built-in resilience patterns for reliable API calls
- Intelligent caching to reduce load on downstream APIs
- Type-safe data transformation using Rust's powerful type system
- Detailed metrics and logging for API calls
Adding an API Integration
Automated Method (Recommended)
The easiest way to add a new API integration is to use the provided script:
```bash
./scripts/add_api.sh <api_name> <api_url> <schema_url> [endpoint_path] [param_name]
```
For example:
```bash
./scripts/add_api.sh petstore https://petstore.swagger.io/v2 https://petstore.swagger.io/v2/swagger.json pet id
```
This will:
- Generate API client code from the OpenAPI schema
- Create handler functions for the specified endpoint
- Configure routes for the new API
- Add the API to the registry
Manual Method
If you prefer to add an API integration manually:
- Create an API client:
```rust
pub struct PetstoreClient {
    base_url: String,
    http_client: Client,
}

impl PetstoreClient {
    pub fn new(base_url: &str) -> Self {
        Self {
            base_url: base_url.to_string(),
            http_client: Client::new(),
        }
    }

    pub async fn get_pet(&self, id: i64) -> Result<Pet, ApiError> {
        let url = format!("{}/pet/{}", self.base_url, id);

        let response = self.http_client
            .get(&url)
            .send()
            .await
            .map_err(|e| ApiError::RequestFailed(e.to_string()))?;

        if !response.status().is_success() {
            return Err(ApiError::ResponseError(
                response.status().as_u16(),
                format!("API returned error: {}", response.status()),
            ));
        }

        response
            .json::<Pet>()
            .await
            .map_err(|e| ApiError::DeserializationError(e.to_string()))
    }
}
```
- Create models:
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Pet {
    pub id: i64,
    pub name: String,
    pub status: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub category: Option<Category>,
    #[serde(skip_serializing_if = "Vec::is_empty", default)]
    pub tags: Vec<Tag>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Category {
    pub id: i64,
    pub name: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Tag {
    pub id: i64,
    pub name: String,
}
```
- Implement API resource trait:
```rust
use navius::core::api::{ApiResource, ApiError};

impl ApiResource for Pet {
    type Id = i64;

    fn resource_type() -> &'static str {
        "pet"
    }

    fn api_name() -> &'static str {
        "Petstore"
    }
}
```
- Create handler functions:
```rust
use navius::core::api::{create_api_handler, ApiHandlerOptions};

pub async fn get_pet_handler(
    State(state): State<AppState>,
    Path(id): Path<i64>,
) -> Result<Json<Pet>, AppError> {
    // Create API handler with reliability features
    let handler = create_api_handler(
        |state, id| async move {
            let client = &state.petstore_client;
            client.get_pet(id).await
        },
        ApiHandlerOptions {
            use_cache: true,
            use_retries: true,
            max_retry_attempts: 3,
            cache_ttl_seconds: 300,
            detailed_logging: true,
        },
    );

    handler(State(state), Path(id)).await
}
```
- Add routes:
```rust
// In your router setup
let api_routes = Router::new()
    .route("/pet/:id", get(get_pet_handler));
```
API Resource Abstractions
The `ApiResource` trait provides the foundation for the API abstraction:
```rust
pub trait ApiResource: Sized + Send + Sync + 'static {
    type Id: Display + Eq + Hash + Clone + Send + Sync + 'static;

    fn resource_type() -> &'static str;
    fn api_name() -> &'static str;
}
```
This allows the framework to automatically provide:
- Consistent caching of API responses
- Unified error handling
- Standardized logging patterns
- Metrics collection for API calls
- Retry logic with backoff
Reliability Patterns
Navius implements several reliability patterns for API integrations:
Caching
API responses are automatically cached using the configured cache implementation:
```rust
// Configure caching options
ApiHandlerOptions {
    use_cache: true,
    cache_ttl_seconds: 300, // Cache for 5 minutes
    // ...
}
```
Retry Logic
Failed API calls can be automatically retried with exponential backoff:
```rust
// Configure retry options
ApiHandlerOptions {
    use_retries: true,
    max_retry_attempts: 3,
    // ...
}
```
Circuit Breaking
Navius implements circuit breaking to prevent cascading failures:
```rust
// Enable circuit breaking
let circuit_breaker = CircuitBreaker::new(
    "petstore",
    CircuitBreakerConfig {
        success_threshold: 2,
        timeout_ms: 1000,
        half_open_timeout_ms: 5000,
    },
);

// Apply to client
let client = PetstoreClient::new("https://petstore.swagger.io/v2")
    .with_circuit_breaker(circuit_breaker);
```
Handling Errors
Navius provides a standardized error handling pattern for API calls:
```rust
// In your AppError implementation
#[derive(Debug, Error)]
pub enum AppError {
    #[error("API Error: {0}")]
    Api(#[from] ApiError),
    // Other error types...
}

// ApiError is provided by the framework
#[derive(Debug, Error)]
pub enum ApiError {
    #[error("Request failed: {0}")]
    RequestFailed(String),

    #[error("Response error ({0}): {1}")]
    ResponseError(u16, String),

    #[error("Deserialization error: {0}")]
    DeserializationError(String),

    #[error("Resource not found")]
    NotFound,

    #[error("Circuit open")]
    CircuitOpen,

    #[error("Request timeout")]
    Timeout,
}
```
Advanced Configuration
Manual Cache Control
You can manually control caching for specific scenarios:
```rust
// Force refresh from the source API
let handler = create_api_handler(
    |state, id| async move {
        let client = &state.petstore_client;
        client.get_pet(id).await
    },
    ApiHandlerOptions {
        use_cache: true,
        force_refresh: true, // Skip cache and update it
        cache_ttl_seconds: 300,
        // ...
    },
);
```
Custom Cache Keys
For complex scenarios, you can provide custom cache key generation:
```rust
// Custom cache key generation
let handler = create_api_handler_with_options(
    |state, id| async move {
        let client = &state.petstore_client;
        client.get_pet(id).await
    },
    |id| format!("custom:pet:{}", id), // Custom cache key
    ApiHandlerOptions {
        use_cache: true,
        // ...
    },
);
```
Custom Response Transformation
Transform API responses before returning them:
```rust
// Transform the API response
let handler = create_api_handler_with_transform(
    |state, id| async move {
        let client = &state.petstore_client;
        client.get_pet(id).await
    },
    |pet| {
        // Transform the pet before returning
        PetResponse {
            id: pet.id,
            name: pet.name,
            status: pet.status,
            // Additional transformations...
        }
    },
    ApiHandlerOptions {
        // ...
    },
);
```
Testing API Integrations
Navius provides utilities for testing API integrations:
```rust
use navius::test::api::{MockApiClient, ResponseBuilder};

#[tokio::test]
async fn test_pet_handler() {
    // Create mock API client
    let mut mock_client = MockApiClient::new();

    // Configure mock response
    mock_client.expect_get_pet()
        .with(eq(1))
        .times(1)
        .returning(|_| {
            Ok(Pet {
                id: 1,
                name: "Rex".to_string(),
                status: "available".to_string(),
                category: None,
                tags: vec![],
            })
        });

    // Create test app with mock client
    let app = test::build_app().with_api_client(mock_client).await;

    // Test the handler
    let response = app
        .get("/pet/1")
        .send()
        .await;

    assert_eq!(response.status(), StatusCode::OK);

    let pet: Pet = response.json().await;
    assert_eq!(pet.id, 1);
    assert_eq!(pet.name, "Rex");
}
```
Performance Considerations
When integrating APIs, consider:
- Caching Strategy: Choose appropriate TTL values based on data freshness requirements
- Batch Operations: Use batch endpoints where available instead of multiple single-item calls
- Concurrent Requests: Use `futures::future::join_all` for parallel API calls
- Response Size: Request only the fields you need if the API supports field filtering
- Timeouts: Configure appropriate timeouts to prevent blocking application threads
Conclusion
Navius provides a comprehensive API integration framework that makes it easy to connect to external services while maintaining resilience, performance, and code quality. By using the API resource abstraction pattern, you can ensure consistent patterns for all API integrations in your application.
For more complex scenarios or custom integrations, you can extend the framework's base components to implement domain-specific functionality while still benefiting from the built-in reliability features.
Related Documents
- Installation Guide - How to install the application
- Development Workflow - Development best practices
title: Navius API Design Guide description: Best practices for designing and implementing APIs in Navius applications category: guides tags:
- api
- integration
- design
- patterns related:
- ../development/testing.md
- ../../reference/architecture/principles.md
- api-integration.md last_updated: March 27, 2025 version: 1.0
Navius API Design Guide
Overview
This guide outlines best practices and patterns for designing APIs in Navius applications. It covers API design principles, implementation approaches, error handling strategies, and performance considerations to help you build consistent, maintainable, and user-friendly APIs.
Prerequisites
Before using this guide, you should have:
- Basic understanding of RESTful API principles
- Familiarity with Rust and Navius framework basics
- Knowledge of HTTP status codes and request/response patterns
API Design Principles
Navius follows these core API design principles:
- Resource-Oriented Design: Focus on resources and their representations
- Predictable URLs: Use consistent URL patterns for resources
- Proper HTTP Methods: Use appropriate HTTP methods for operations
- Consistent Error Handling: Standardize error responses
- Versioned APIs: Support API versioning for backward compatibility
Step-by-step API Design
1. Define Your Resources
Start by identifying the core resources in your application domain:
```rust
// Example resource definitions
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub role: UserRole,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

pub struct Post {
    pub id: Uuid,
    pub title: String,
    pub content: String,
    pub author_id: Uuid,
    pub published: bool,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}
```
2. Design Resource URLs
Use consistent URL patterns for resources:
| Resource | URL Pattern | Description |
|---|---|---|
| Collection | `/api/v1/users` | The collection of all users |
| Individual | `/api/v1/users/{id}` | A specific user by ID |
| Sub-collection | `/api/v1/users/{id}/posts` | All posts for a user |
| Sub-resource | `/api/v1/users/{id}/posts/{post_id}` | A specific post for a user |
3. Choose HTTP Methods
Map operations to appropriate HTTP methods:
| Operation | HTTP Method | URL | Description |
|---|---|---|---|
| List | GET | `/api/v1/users` | Get all users (paginated) |
| Read | GET | `/api/v1/users/{id}` | Get a specific user |
| Create | POST | `/api/v1/users` | Create a new user |
| Update | PUT/PATCH | `/api/v1/users/{id}` | Update a user |
| Delete | DELETE | `/api/v1/users/{id}` | Delete a user |
4. Define Request/Response Schemas
Create clear input and output schemas:
```rust
// Request schema
#[derive(Debug, Deserialize, Validate)]
pub struct CreateUserRequest {
    #[validate(email)]
    pub email: String,

    #[validate(length(min = 2, max = 100))]
    pub name: String,

    #[validate(length(min = 8))]
    pub password: String,
}

// Response schema
#[derive(Debug, Serialize)]
pub struct UserResponse {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub role: String,
    pub created_at: DateTime<Utc>,
}
```
5. Implement Route Handlers
Create handlers that process requests:
```rust
pub async fn get_user(
    State(state): State<AppState>,
    Path(id): Path<Uuid>,
) -> Result<Json<UserResponse>, AppError> {
    let user = state.user_service.get_user(id).await?;

    Ok(Json(UserResponse {
        id: user.id,
        email: user.email,
        name: user.name,
        role: user.role.to_string(),
        created_at: user.created_at,
    }))
}

pub async fn create_user(
    State(state): State<AppState>,
    Json(request): Json<CreateUserRequest>,
) -> Result<(StatusCode, Json<UserResponse>), AppError> {
    // Validate the request
    request.validate()?;

    // Create the user
    let user = state.user_service.create_user(request).await?;

    // Return 201 Created with the user response
    Ok((
        StatusCode::CREATED,
        Json(UserResponse {
            id: user.id,
            email: user.email,
            name: user.name,
            role: user.role.to_string(),
            created_at: user.created_at,
        }),
    ))
}
```
6. Register API Routes
Register your API routes with the router:
```rust
pub fn user_routes() -> Router<AppState> {
    Router::new()
        .route("/users", get(list_users).post(create_user))
        .route("/users/:id", get(get_user).put(update_user).delete(delete_user))
        .route("/users/:id/posts", get(list_user_posts))
}

// In your main router
let api_router = Router::new()
    .nest("/v1", user_routes())
    .layer(ValidateRequestHeaderLayer::bearer())
    .layer(Extension(rate_limiter));
```
API Error Handling
Standard Error Response Format
Navius uses a consistent error format:
```json
{
  "error": {
    "type": "validation_error",
    "message": "The request was invalid",
    "details": [
      {
        "field": "email",
        "message": "Must be a valid email address"
      }
    ]
  }
}
```
Implementing Error Handling
Use the `AppError` type for error handling:
```rust
#[derive(Debug, Error)]
pub enum AppError {
    #[error("Resource not found")]
    NotFound,

    #[error("Unauthorized")]
    Unauthorized,

    #[error("Forbidden")]
    Forbidden,

    #[error("Validation error")]
    Validation(#[from] ValidationError),

    #[error("Internal server error: {0}")]
    Internal(String),

    #[error("Database error: {0}")]
    Database(#[from] sqlx::Error),
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, error_type, message) = match &self {
            AppError::NotFound => (StatusCode::NOT_FOUND, "not_found", self.to_string()),
            AppError::Unauthorized => (StatusCode::UNAUTHORIZED, "unauthorized", self.to_string()),
            AppError::Forbidden => (StatusCode::FORBIDDEN, "forbidden", self.to_string()),
            AppError::Validation(_) => (StatusCode::BAD_REQUEST, "validation_error", self.to_string()),
            AppError::Internal(_) => (
                StatusCode::INTERNAL_SERVER_ERROR,
                "internal_error",
                "An internal server error occurred".to_string(),
            ),
            AppError::Database(_) => (
                StatusCode::INTERNAL_SERVER_ERROR,
                "database_error",
                "A database error occurred".to_string(),
            ),
        };

        let error_response = json!({
            "error": {
                "type": error_type,
                "message": message,
                "details": get_error_details(&self),
            }
        });

        (status, Json(error_response)).into_response()
    }
}
```
Validation
Request Validation
Navius leverages the `validator` crate for request validation:
```rust
#[derive(Debug, Deserialize, Validate)]
pub struct CreatePostRequest {
    #[validate(length(min = 1, max = 200))]
    pub title: String,

    #[validate(length(min = 1))]
    pub content: String,

    #[serde(default)]
    pub published: bool,
}

// In your handler
pub async fn create_post(
    State(state): State<AppState>,
    Path(user_id): Path<Uuid>,
    Json(request): Json<CreatePostRequest>,
) -> Result<(StatusCode, Json<PostResponse>), AppError> {
    // Validate the request
    request.validate()?;

    // Create the post
    let post = state.post_service.create_post(user_id, request).await?;

    // Return 201 Created
    Ok((StatusCode::CREATED, Json(post.into())))
}
```
Versioning Strategies
Navius supports these API versioning strategies:
URL Versioning
```
/api/v1/users
/api/v2/users
```
This is implemented by nesting routes:
```rust
let api_router = Router::new()
    .nest("/v1", v1_routes())
    .nest("/v2", v2_routes());
```
Header Versioning
```
GET /api/users
Accept-Version: v1
```
This requires a custom extractor:
```rust
pub struct ApiVersion(pub String);

#[async_trait]
impl<S> FromRequestParts<S> for ApiVersion
where
    S: Send + Sync,
{
    type Rejection = AppError;

    // Note: axum's FromRequestParts receives `&mut Parts`
    async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
        let version = parts
            .headers
            .get("Accept-Version")
            .and_then(|v| v.to_str().ok())
            .unwrap_or("v1")
            .to_string();

        Ok(ApiVersion(version))
    }
}

// Using in a handler
pub async fn get_user(
    State(state): State<AppState>,
    Path(id): Path<Uuid>,
    ApiVersion(version): ApiVersion,
) -> Result<Json<UserResponse>, AppError> {
    match version.as_str() {
        "v1" => {
            let user = state.user_service.get_user(id).await?;
            Ok(Json(v1::UserResponse::from(user)))
        }
        "v2" => {
            let user = state.user_service.get_user_with_details(id).await?;
            Ok(Json(v2::UserResponse::from(user)))
        }
        _ => Err(AppError::NotFound),
    }
}
```
Performance Optimization
Pagination
Implement consistent pagination for collection endpoints:
```rust
#[derive(Debug, Deserialize)]
pub struct PaginationParams {
    #[serde(default = "default_page")]
    pub page: usize,

    #[serde(default = "default_page_size")]
    pub page_size: usize,
}

fn default_page() -> usize { 1 }
fn default_page_size() -> usize { 20 }

// Using in a handler
pub async fn list_users(
    State(state): State<AppState>,
    Query(pagination): Query<PaginationParams>,
) -> Result<Json<PaginatedResponse<UserResponse>>, AppError> {
    let (users, total) = state.user_service
        .list_users(pagination.page, pagination.page_size)
        .await?;

    let response = PaginatedResponse {
        data: users.into_iter().map(UserResponse::from).collect(),
        page: pagination.page,
        page_size: pagination.page_size,
        total,
    };

    Ok(Json(response))
}
```
Filtering and Sorting
Support consistent query parameters for filtering and sorting:
```rust
#[derive(Debug, Deserialize)]
pub struct UserFilterParams {
    pub role: Option<String>,
    pub search: Option<String>,
    pub sort_by: Option<String>,
    pub sort_order: Option<String>,
}

// Using in a handler
pub async fn list_users(
    State(state): State<AppState>,
    Query(pagination): Query<PaginationParams>,
    Query(filter): Query<UserFilterParams>,
) -> Result<Json<PaginatedResponse<UserResponse>>, AppError> {
    let (users, total) = state.user_service
        .list_users_with_filter(
            pagination.page,
            pagination.page_size,
            filter,
        )
        .await?;

    // Create response
    let response = PaginatedResponse { /*...*/ };

    Ok(Json(response))
}
```
Testing API Endpoints
Navius provides utilities for API testing:
```rust
#[tokio::test]
async fn test_create_user() {
    // Create test app
    let app = TestApp::new().await;

    // Create test request
    let request = json!({
        "email": "[email protected]",
        "name": "Test User",
        "password": "password123"
    });

    // Send request
    let response = app
        .post("/api/v1/users")
        .json(&request)
        .send()
        .await;

    // Assert response
    assert_eq!(response.status(), 201);

    let user: UserResponse = response.json().await;
    assert_eq!(user.email, "[email protected]");
    assert_eq!(user.name, "Test User");
}
```
API Documentation
OpenAPI Integration
Navius supports OpenAPI documentation generation:
```rust
// In your main.rs
let api_docs = OpenApiDocumentBuilder::new()
    .title("Navius API")
    .version("1.0.0")
    .description("API for Navius application")
    .build();

// Register documentation routes
let app = Router::new()
    .nest("/api", api_router)
    .nest("/docs", OpenApiRouter::new(api_docs));
```
Related Documents
- API Integration Guide - Integrating with external APIs
- Authentication Guide - API authentication
- Testing Guide - Testing API endpoints
title: "Navius PostgreSQL Integration Guide" description: "A comprehensive guide to integrating PostgreSQL databases with Navius applications, including connection management, migrations, query optimization, and AWS RDS deployment" category: guides tags:
- database
- postgresql
- aws-rds
- migrations
- orm
- performance
- connection-pooling
- transactions related:
- ../reference/api/database-api.md
- ../guides/deployment/aws-deployment.md
- ../guides/features/caching.md
- ../reference/configuration/environment-variables.md last_updated: March 27, 2025 version: 1.0
Navius PostgreSQL Integration Guide
This guide explains how to implement PostgreSQL database connections in your Navius application, leveraging the framework's powerful database abstraction layer.
Local Development Setup
For local development, Navius provides a Docker Compose configuration in `test/resources/docker/docker-compose.dev.yml`.
To start a PostgreSQL instance:
```bash
# From the project root:
cd test/resources/docker
docker-compose -f docker-compose.dev.yml up -d
```
This creates a PostgreSQL database with the following connection details:
- Host: localhost
- Port: 5432
- User: postgres
- Password: postgres
- Database: app
Database Configuration
Ensure your `config/development.yaml` has the database section enabled:
```yaml
database:
  enabled: true
  url: "postgres://postgres:postgres@localhost:5432/app"
  max_connections: 10
  connect_timeout_seconds: 30
  idle_timeout_seconds: 300
```
Implementation Steps
Navius makes it easy to integrate with PostgreSQL for data persistence. Follow these steps:
1. Add SQLx Dependency
Navius uses SQLx as its preferred database library. Add it to your `Cargo.toml` if it's not already included:
```toml
[dependencies]
# ... existing dependencies
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "postgres", "macros", "chrono", "uuid", "json"] }
```
2. Implement Database Connection
Update `src/core/database/connection.rs` to use a real PostgreSQL connection:
```rust
use sqlx::{postgres::PgPoolOptions, PgPool as SqlxPgPool};

// Replace the mock implementation with a real one
#[derive(Debug, Clone)]
pub struct DatabaseConnection {
    config: DatabaseConfig,
    pool: SqlxPgPool,
}

impl DatabaseConnection {
    pub async fn new(config: &DatabaseConfig) -> Result<Self, DatabaseError> {
        let pool = PgPoolOptions::new()
            .max_connections(config.max_connections)
            // SQLx 0.7 uses `acquire_timeout` (formerly `connect_timeout`)
            .acquire_timeout(Duration::from_secs(config.connect_timeout_seconds))
            .idle_timeout(config.idle_timeout_seconds.map(Duration::from_secs))
            .connect(&config.url)
            .await
            .map_err(|e| DatabaseError::ConnectionFailed(e.to_string()))?;

        Ok(Self {
            config: config.clone(),
            pool,
        })
    }
}

#[async_trait]
impl PgPool for DatabaseConnection {
    async fn ping(&self) -> Result<(), AppError> {
        sqlx::query("SELECT 1")
            .execute(&self.pool)
            .await
            .map_err(|e| AppError::DatabaseError(format!("Database ping failed: {}", e)))?;
        Ok(())
    }

    async fn begin(&self) -> Result<Box<dyn PgTransaction>, AppError> {
        let tx = self.pool
            .begin()
            .await
            .map_err(|e| AppError::DatabaseError(format!("Failed to begin transaction: {}", e)))?;
        Ok(Box::new(SqlxTransaction { tx }))
    }
}

// Transaction implementation
struct SqlxTransaction {
    tx: sqlx::Transaction<'static, sqlx::Postgres>,
}

#[async_trait]
impl PgTransaction for SqlxTransaction {
    async fn commit(self: Box<Self>) -> Result<(), AppError> {
        self.tx
            .commit()
            .await
            .map_err(|e| AppError::DatabaseError(format!("Failed to commit transaction: {}", e)))?;
        Ok(())
    }

    async fn rollback(self: Box<Self>) -> Result<(), AppError> {
        self.tx
            .rollback()
            .await
            .map_err(|e| AppError::DatabaseError(format!("Failed to rollback transaction: {}", e)))?;
        Ok(())
    }
}
```
3. Update Database Initialization
Modify `src/core/database/mod.rs` to use the new implementation:
```rust
pub async fn init_database(config: &DatabaseConfig) -> Result<Arc<Box<dyn PgPool>>, AppError> {
    if !config.enabled {
        tracing::info!("Database is disabled in configuration");
        return Err(AppError::DatabaseError("Database is disabled".into()));
    }

    tracing::info!("Initializing database connection to {}", config.url);

    let conn = DatabaseConnection::new(config)
        .await
        .map_err(|e| AppError::DatabaseError(format!("Failed to initialize database: {}", e)))?;

    // Return an Arc boxed as PgPool trait object
    Ok(Arc::new(Box::new(conn) as Box<dyn PgPool>))
}
```
4. Implement Repository With SQL
Update the repository implementations to use SQL queries instead of in-memory storage:
```rust
// Example for UserRepository
impl UserRepository {
    pub async fn find_by_username(&self, username: &str) -> Result<Option<User>, AppError> {
        let db_pool = self.db_pool();
        let tx = db_pool.begin().await?;

        let user = sqlx::query_as!(
            User,
            r#"
            SELECT id, username, email, full_name, is_active,
                   role as "role: UserRole", created_at, updated_at
            FROM users
            WHERE username = $1
            "#,
            username
        )
        .fetch_optional(&*tx)
        .await
        .map_err(|e| AppError::DatabaseError(format!("Failed to fetch user: {}", e)))?;

        tx.commit().await?;
        Ok(user)
    }
}

#[async_trait]
impl Repository<User, Uuid> for UserRepository {
    async fn find_by_id(&self, id: Uuid) -> Result<Option<User>, AppError> {
        let db_pool = self.db_pool();
        let tx = db_pool.begin().await?;

        let user = sqlx::query_as!(
            User,
            r#"
            SELECT id, username, email, full_name, is_active,
                   role as "role: UserRole", created_at, updated_at
            FROM users
            WHERE id = $1
            "#,
            id
        )
        .fetch_optional(&*tx)
        .await
        .map_err(|e| AppError::DatabaseError(format!("Failed to fetch user: {}", e)))?;

        tx.commit().await?;
        Ok(user)
    }

    async fn save(&self, entity: User) -> Result<User, AppError> {
        let db_pool = self.db_pool();
        let tx = db_pool.begin().await?;

        let user = sqlx::query_as!(
            User,
            r#"
            INSERT INTO users (id, username, email, full_name, is_active, role, created_at, updated_at)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
            ON CONFLICT (id) DO UPDATE
            SET email = $3, full_name = $4, is_active = $5, role = $6, updated_at = $8
            RETURNING id, username, email, full_name, is_active,
                      role as "role: UserRole", created_at, updated_at
            "#,
            entity.id,
            entity.username,
            entity.email,
            entity.full_name,
            entity.is_active,
            entity.role as UserRole,
            entity.created_at,
            entity.updated_at
        )
        .fetch_one(&*tx)
        .await
        .map_err(|e| AppError::DatabaseError(format!("Failed to save user: {}", e)))?;

        tx.commit().await?;
        Ok(user)
    }

    // Implement other repository methods similarly
}
```
5. Create Migrations
Create migrations in the app database folder:
```sql
-- src/app/database/migrations/001_create_users_table.sql
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY,
    username VARCHAR(255) NOT NULL UNIQUE,
    email VARCHAR(255) NOT NULL UNIQUE,
    full_name VARCHAR(255),
    is_active BOOLEAN NOT NULL DEFAULT TRUE,
    role VARCHAR(50) NOT NULL,
    created_at TIMESTAMPTZ NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_users_username ON users(username);
CREATE INDEX idx_users_email ON users(email);
```
6. Implement Migration Runner
Add a function to run migrations during server startup:
```rust
pub async fn run_migrations(pool: &SqlxPgPool) -> Result<(), AppError> {
    sqlx::migrate!("./src/app/database/migrations")
        .run(pool)
        .await
        .map_err(|e| AppError::DatabaseError(format!("Migration failed: {}", e)))?;

    tracing::info!("Database migrations completed successfully");
    Ok(())
}
```
Testing PostgreSQL Implementation
Add integration tests that use a real PostgreSQL database:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use sqlx::postgres::PgPoolOptions;
    use uuid::Uuid;

    async fn setup_test_db() -> SqlxPgPool {
        // Use a unique database name for each test run
        let db_name = format!("test_db_{}", Uuid::new_v4().simple());

        // Connect to PostgreSQL server
        let admin_pool = PgPoolOptions::new()
            .max_connections(5)
            .connect("postgres://postgres:postgres@localhost:5432/postgres")
            .await
            .expect("Failed to connect to PostgreSQL");

        // Create test database
        sqlx::query(&format!("CREATE DATABASE {}", db_name))
            .execute(&admin_pool)
            .await
            .expect("Failed to create test database");

        // Connect to test database
        let pool = PgPoolOptions::new()
            .max_connections(5)
            .connect(&format!("postgres://postgres:postgres@localhost:5432/{}", db_name))
            .await
            .expect("Failed to connect to test database");

        // Run migrations
        sqlx::migrate!("./src/app/database/migrations")
            .run(&pool)
            .await
            .expect("Failed to run migrations");

        pool
    }

    #[tokio::test]
    async fn test_user_repository() {
        let pool = setup_test_db().await;

        // Create repository
        let repo = UserRepository::new(pool);

        // Create user
        let user = User::new(
            "testuser".to_string(),
            "[email protected]".to_string(),
        );

        // Save user
        let saved_user = repo.save(user).await.expect("Failed to save user");

        // Retrieve user by ID
        let retrieved_user = repo.find_by_id(saved_user.id).await.expect("Failed to find user");
        assert!(retrieved_user.is_some());
        let retrieved_user = retrieved_user.unwrap();
        assert_eq!(retrieved_user.username, "testuser");

        // Retrieve user by username
        let by_username = repo.find_by_username("testuser").await.expect("Failed to find user");
        assert!(by_username.is_some());
        assert_eq!(by_username.unwrap().id, saved_user.id);

        // Update user
        let mut updated_user = retrieved_user;
        updated_user.email = "[email protected]".to_string();
        let saved_updated = repo.save(updated_user).await.expect("Failed to update user");
        assert_eq!(saved_updated.email, "[email protected]");

        // Delete user
        let deleted = repo.delete(saved_user.id).await.expect("Failed to delete user");
        assert!(deleted);

        // Verify deletion
        let should_be_none = repo.find_by_id(saved_user.id).await.expect("Query failed");
        assert!(should_be_none.is_none());
    }
}
```
Production Considerations
For production environments:
- Use AWS RDS for PostgreSQL
- Configure connection pooling appropriately
- Use IAM authentication for database access
- Enable encryption in transit and at rest
- Set up automated backups
- Configure monitoring and alerts
- Use read replicas for read-heavy workloads
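As a sketch, a production configuration following this checklist might look like the fragment below. Only keys already shown in this guide are used; the endpoint and pool sizes are illustrative assumptions to adjust for your workload:

```yaml
# Illustrative production settings (values are assumptions)
database:
  enabled: true
  url: "${DATABASE_URL}"        # points at the RDS endpoint, injected via environment
  max_connections: 50           # size the pool for your RDS instance class
  connect_timeout_seconds: 5    # fail fast rather than queueing indefinitely
  idle_timeout_seconds: 300
```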
Best Practices
- Use prepared statements for all database queries
- Implement proper error handling and retries
- Keep transactions as short as possible
- Use connection pooling to manage database connections
- Implement database migrations for schema changes
- Use database indexes for performance
- Write integration tests against a real database
- Monitor query performance and optimize slow queries
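The retry advice above is usually paired with exponential backoff so failed connections don't hammer the database. The helper below is an illustrative sketch; `backoff_delay` and its parameters are hypothetical names, not part of the Navius API:

```rust
use std::time::Duration;

/// Delay before the nth retry: exponential backoff with a fixed cap.
/// `attempt` is zero-based, so the first retry waits `base_ms`.
fn backoff_delay(attempt: u32, base_ms: u64, cap_ms: u64) -> Duration {
    let delay = base_ms.saturating_mul(2u64.saturating_pow(attempt));
    Duration::from_millis(delay.min(cap_ms))
}

fn main() {
    for attempt in 0..5 {
        // 100ms, 200ms, 400ms, 800ms, 1600ms
        println!("retry {}: wait {:?}", attempt, backoff_delay(attempt, 100, 2_000));
    }
}
```

Capping the delay keeps worst-case waits bounded; adding random jitter on top is a common refinement to avoid synchronized retries.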
Related Documents
- Installation Guide - How to install the application
- Development Workflow - Development best practices
title: Redis Caching Guide description: "Implementation guide for basic Redis caching in Navius applications" category: guides tags:
- features
- caching
- redis
- performance related:
- ../caching-strategies.md
- ../../reference/configuration/cache-config.md
- ../../reference/patterns/caching-patterns.md
- ../../examples/two-tier-cache-example.md last_updated: March 27, 2025 version: 1.1
Redis Caching Guide
This guide covers the implementation of basic Redis caching in Navius applications.
Overview
Caching is an essential strategy for improving application performance and reducing database load. Navius provides built-in support for Redis caching, allowing you to easily implement efficient caching in your application.
Basic Redis Caching Setup
Installation
First, ensure Redis is installed and running:
```shell
# On macOS using Homebrew
brew install redis
brew services start redis

# On Ubuntu
sudo apt install redis-server
sudo systemctl start redis-server
```
Configuration
Configure Redis in your application:
```yaml
# config/default.yaml
cache:
  enabled: true
  default_provider: "redis"
  providers:
    - name: "redis"
      type: "redis"
      connection_string: "redis://localhost:6379"
      ttl_seconds: 300  # 5 minutes
```
See the Cache Configuration Reference for a complete list of options.
Basic Usage
Here's a simple example of using Redis caching in your application:
```rust
use std::time::Duration;

use navius::core::services::cache_service::{CacheService, TypedCache};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
    id: String,
    name: String,
    email: String,
}

async fn cache_example(cache_service: &CacheService) -> Result<(), AppError> {
    // Create a typed cache for User objects
    let user_cache = cache_service.get_typed_cache::<User>("users").await?;

    // Store a user in the cache
    let user = User {
        id: "123".to_string(),
        name: "Alice".to_string(),
        email: "[email protected]".to_string(),
    };

    // Cache with a TTL of 5 minutes
    user_cache.set("user:123", &user, Some(Duration::from_secs(300))).await?;

    // Retrieve from the cache
    if let Some(cached_user) = user_cache.get("user:123").await? {
        println!("Found user: {:?}", cached_user);
    }

    Ok(())
}
```
Cache-Aside Pattern
The most common caching pattern is the cache-aside pattern:
```rust
async fn get_user(&self, id: &str) -> Result<Option<User>, AppError> {
    let cache_key = format!("user:{}", id);

    // Try the cache first
    if let Some(user) = self.cache.get(&cache_key).await? {
        return Ok(Some(user));
    }

    // Not in the cache; fall back to the database
    if let Some(user) = self.repository.find_by_id(id).await? {
        // Store in the cache for future requests
        self.cache.set(&cache_key, &user, Some(Duration::from_secs(300))).await?;
        return Ok(Some(user));
    }

    Ok(None)
}
```
For more caching patterns, see the Caching Patterns Reference.
Performance Considerations
- TTL (Time To Live): Set appropriate TTL values based on how frequently your data changes
- Cache Keys: Use consistent and descriptive cache key formats
- Cache Size: Monitor memory usage in production environments
- Cache Invalidation: Implement proper cache invalidation strategies
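The consistent-key advice above is easiest to enforce with a single helper used by both readers and writers. This is an illustrative sketch; `cache_key` is a hypothetical name, not a Navius API:

```rust
/// Build a namespaced cache key such as "users:profile:123".
/// Funneling all key construction through one helper avoids drift
/// between the code that writes cache entries and the code that reads them.
fn cache_key(namespace: &str, entity: &str, id: &str) -> String {
    format!("{}:{}:{}", namespace, entity, id)
}

fn main() {
    let key = cache_key("users", "profile", "123");
    assert_eq!(key, "users:profile:123");
    println!("{}", key);
}
```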
Advanced Caching
For more complex caching needs, Navius provides advanced options:
- Two-Tier Caching: Combines in-memory and Redis caching for optimal performance
- Typed Caching: Type-safe caching with automatic serialization/deserialization
- Cache Eviction Policies: Various strategies for cache eviction
To learn more, see:
- Advanced Caching Strategies - Learn about the Two-Tier Cache implementation
- Two-Tier Cache API - API reference for the Two-Tier Cache
- Two-Tier Cache Example - Implementation examples
Redis Clustering
For high-availability production environments, consider using Redis clustering:
```yaml
cache:
  providers:
    - name: "redis"
      type: "redis"
      cluster_mode: true
      nodes:
        - "redis://node1:6379"
        - "redis://node2:6379"
        - "redis://node3:6379"
```
Monitoring
Monitor your Redis cache using:
```rust
// Get cache statistics
let stats = cache_service.stats().await?;
println!("Hit ratio: {}%", stats.hit_ratio * 100.0);
println!("Miss count: {}", stats.miss_count);
println!("Size: {} items", stats.size);
```
Troubleshooting
Common issues and solutions:
- Connection Failures: Check Redis server status and connection settings
- Serialization Errors: Ensure all cached objects implement Serialize/Deserialize
- Memory Issues: Configure maxmemory and eviction policies in Redis configuration
- Slow Performance: Consider using the Two-Tier Cache implementation for improved performance
Next Steps
- Explore Advanced Caching Strategies for implementing two-tier caching
- Check the Two-Tier Cache Example for implementation details
- Review the Caching Patterns reference for best practices
- Configure your cache using the Cache Configuration Reference
Related Resources
- WebSocket Support
title: "Server Customization CLI Guide" description: "Detailed instructions for using the Server Customization System's CLI tool to manage features and create optimized server builds" category: guides tags:
- features
- server-customization
- cli
- performance
- optimization
- deployment related:
- ../../reference/configuration/feature-config.md
- ../../examples/server-customization-example.md
- ../../feature-system.md last_updated: March 27, 2025 version: 1.0
Server Customization CLI Guide
This guide provides detailed instructions for using the Server Customization System's CLI tool to manage features and create customized server builds.
Overview
The Server Customization CLI allows you to:
- List available features and their status
- Enable or disable specific features
- Create custom server builds with selected features
- Save and load feature configurations
- Visualize feature dependencies
- Analyze and optimize dependencies
Installation
The CLI tool is included with the Navius repository. To build it:
```shell
# Using cargo
cargo build --bin features_cli

# The binary will be available at
./target/debug/features_cli
```
Command Reference
Listing Features
To view all available features:
features_cli list
Example output:
```text
Available Features:
✓ core              Core server functionality (required)
✓ config            Configuration system (required)
✓ metrics           Metrics collection and reporting
✗ advanced_metrics  Advanced metrics and custom reporters
✓ caching           Caching system
✓ redis_caching     Redis cache provider
✗ tracing           Distributed tracing
✓ security          Security features
...
```
To get more detailed information:
features_cli list --verbose
Getting Feature Status
To check the current status of features:
features_cli status
Example output:
```text
Feature Status:
Enabled features: 12
Disabled features: 8
Required dependencies: 3

Enabled:
- core (required)
- config (required)
- error_handling (required)
- metrics
- caching
- redis_caching
- security
...

Disabled:
- advanced_metrics
- tracing
- websocket
...
```
To check a specific feature:
features_cli status metrics
Enabling Features
To enable a specific feature:
features_cli enable metrics
This will also automatically enable any dependencies.
To enable multiple features:
features_cli enable metrics caching security
Disabling Features
To disable a feature:
features_cli disable tracing
Note that you cannot disable features that others depend on without first disabling the dependent features.
```shell
# This will fail if advanced_metrics depends on metrics
features_cli disable metrics

# You need to disable advanced_metrics first
features_cli disable advanced_metrics
features_cli disable metrics
```
Creating Custom Builds
To create a custom server build with only the features you need:
features_cli build --output=my-custom-server
To specify features for the build:
features_cli build --features=core,metrics,caching,security --output=minimal-server
Saving and Loading Configurations
To save your current feature configuration:
features_cli save my-config.json
To load a saved configuration:
features_cli load my-config.json
You can also save in different formats:
features_cli save --format=yaml my-config.yaml
Visualizing Dependencies
To visualize feature dependencies:
features_cli visualize
This will print an ASCII representation of the dependency tree.
For graphical output:
features_cli visualize --format=dot --output=dependencies.dot
You can then use tools like Graphviz to render the DOT file:
dot -Tpng dependencies.dot -o dependencies.png
Analyzing and Optimizing
To analyze your feature selection:
features_cli analyze
This will show information about:
- Size impact of each feature
- Dependencies between features
- Potentially unused features
- Optimization suggestions
To analyze a specific feature:
features_cli analyze metrics
Interactive Mode
The CLI tool also provides an interactive mode:
features_cli interactive
In interactive mode, you can:
- Navigate through features using arrow keys
- Toggle features on/off with the space bar
- View details about each feature
- See immediate feedback on dependencies
- Save your configuration when done
Configuration Files
You can also define features using configuration files:
```yaml
# features.yaml
enabled:
  - core
  - metrics
  - caching
  - security
disabled:
  - tracing
  - advanced_metrics
configuration:
  metrics:
    prometheus_enabled: true
    collect_interval_seconds: 15
```
Load this configuration using:
features_cli load features.yaml
Environment Variables
You can configure the CLI behavior using environment variables:
| Variable | Description |
|---|---|
| `NAVIUS_FEATURES_FILE` | Default features file to load |
| `NAVIUS_FEATURES_FORMAT` | Default format for saving (json, yaml) |
| `NAVIUS_FEATURES_CONFIG` | Path to the main configuration file |
| `NAVIUS_FEATURES_OUTPUT_DIR` | Default output directory for builds |
Best Practices
- Start Minimal: Begin with only essential features enabled
- Use Configuration Files: Save your feature configuration for consistency across environments
- Analyze First: Use the analyze command to optimize your feature set before building
- Check Dependencies: Be aware of feature dependencies when making changes
- Version Control: Store your feature configurations in version control
Troubleshooting
Common Issues
"Cannot disable required feature"
Core features cannot be disabled. These include:
- core
- config
- error_handling
"Cannot disable dependency"
You're trying to disable a feature that others depend on. Disable the dependent features first.
"Feature not found"
Check the feature name with the `list` command. Feature names are case-sensitive.
"Configuration file format not supported"
The CLI supports JSON and YAML formats. Check your file extension.
Examples
Minimal API Server
```shell
features_cli enable core api rest security
features_cli disable metrics tracing caching websocket
features_cli build --output=minimal-api-server
```
Analytics Server
```shell
features_cli enable core metrics advanced_metrics tracing structured_logging
features_cli disable api websocket
features_cli build --output=analytics-server
```
Production Ready Server
```shell
features_cli load production-features.yaml
features_cli enable security rate_limiting
features_cli build --output=production-server
```
title: "Security Guides" description: "Comprehensive guides for implementing and maintaining security in Navius applications, including authentication, authorization, and security best practices" category: "Guides" tags: ["security", "authentication", "authorization", "best practices", "encryption", "vulnerabilities"] last_updated: "April 6, 2025" version: "1.0"
Security Guides
This section contains comprehensive guides for implementing and maintaining security in Navius applications. These guides cover various aspects of application security, from authentication and authorization to secure coding practices and vulnerability management.
Available Guides
Core Security Guides
- Security Best Practices - Essential security practices for Navius applications
- Authentication Implementation - Implementing secure authentication
- Authorization Guide - User permissions and access control
- Data Protection - Securing sensitive data and personally identifiable information
Specific Security Topics
- API Security - Securing API endpoints
- CSRF Protection - Cross-Site Request Forgery prevention
- XSS Prevention - Cross-Site Scripting defenses
- Security Headers - HTTP security header configuration
Security Best Practices
When building Navius applications, follow these security best practices:
- Defense in Depth - Implement multiple layers of security controls
- Least Privilege - Limit access to only what is necessary
- Secure by Default - Ensure security is enabled without user configuration
- Keep Dependencies Updated - Regularly update all dependencies
- Validate All Input - Never trust user input without validation
- Encrypt Sensitive Data - Use strong encryption for sensitive information
- Log Security Events - Maintain detailed logs of security-related events
- Regular Security Testing - Perform security testing as part of development
Security Implementation Workflow
For implementing security in your applications, follow this workflow:
- Identify Assets - Determine what needs to be protected
- Threat Modeling - Identify potential threats and vulnerabilities
- Control Selection - Choose appropriate security controls
- Implementation - Implement security measures
- Testing - Verify security controls work as expected
- Monitoring - Continuously monitor for security issues
- Response - Have a plan for security incidents
Key Security Areas
Authentication
Proper authentication is critical for application security:
- Use multiple factors when possible
- Implement secure password handling
- Manage sessions securely
- Consider passwordless authentication options
Learn more in the Authentication Implementation Guide.
Authorization
Implement robust authorization controls:
- Attribute-based access control
- Role-based permissions
- Resource-level security
- API endpoint protection
Learn more in the Authorization Guide.
Data Security
Protect sensitive data throughout its lifecycle:
- Encryption at rest and in transit
- Secure storage of credentials and secrets
- Data minimization and retention policies
- Secure backup and recovery
Learn more in the Data Protection Guide.
API Security
Secure your API endpoints:
- Authentication for all sensitive endpoints
- Rate limiting and throttling
- Input validation and output sanitization
- API keys and token management
Learn more in the API Security Guide.
Getting Started with Security
If you're new to security in Navius applications, we recommend following this learning path:
- Start with the Security Best Practices Guide for an overview
- Implement secure authentication using the Authentication Implementation Guide
- Define access controls with the Authorization Guide
- Secure your data with the Data Protection Guide
- Protect your API endpoints with the API Security Guide
Related Resources
- Error Handling Guide - Secure error handling
- Configuration Guide - Secure configuration management
- Deployment Guide - Secure deployment practices
- Authentication Example - Authentication implementation example
- Security Standards - Technical security standards
title: "Authentication Implementation Guide" description: "Comprehensive guide for implementing secure authentication in Navius applications, including Microsoft Entra integration and multi-factor authentication" category: "Guides" tags: ["security", "authentication", "Microsoft Entra", "MFA", "tokens", "sessions"] last_updated: "April 6, 2025" version: "1.0"
Authentication Implementation Guide
Overview
This guide provides detailed instructions for implementing secure authentication in Navius applications. Authentication is a critical security component that verifies the identity of users before granting access to protected resources.
Authentication Concepts
Authentication vs. Authorization
- Authentication (covered in this guide) verifies who the user is
- Authorization (covered in Authorization Guide) determines what the user can do
Authentication Factors
Secure authentication typically involves one or more of these factors:
- Knowledge - Something the user knows (password, PIN)
- Possession - Something the user has (mobile device, security key)
- Inherence - Something the user is (fingerprint, facial recognition)
Multi-factor authentication (MFA) combines at least two different factors.
Authentication Options in Navius
Navius supports multiple authentication providers:
- Microsoft Entra ID (formerly Azure AD) - Primary authentication provider
- Local Authentication - Username/password authentication for development
- Custom Providers - Support for implementing custom authentication logic
Microsoft Entra Integration
Configuration
Configure Microsoft Entra in your `config/default.yaml`:
```yaml
auth:
  provider: "entra"
  entra:
    tenant_id: "your-tenant-id"
    client_id: "your-client-id"
    client_secret: "your-client-secret"
    redirect_uri: "https://your-app.com/auth/callback"
    scopes: ["openid", "profile", "email"]
```
Implementation
Implement the authentication flow:
```rust
use navius::auth::providers::EntraAuthProvider;
use navius::auth::{AuthConfig, AuthProvider};

async fn configure_auth(config: &Config) -> Result<impl AuthProvider, Error> {
    let auth_config = AuthConfig::from_config(config)?;
    let provider = EntraAuthProvider::new(auth_config)?;
    Ok(provider)
}

// In your router setup
async fn configure_routes(app: Router, auth_provider: impl AuthProvider) -> Router {
    app.route("/login", get(login_handler))
        .route("/auth/callback", get(auth_callback_handler))
        .route("/logout", get(logout_handler))
        .with_state(AppState { auth_provider })
}

async fn login_handler(
    State(state): State<AppState>,
) -> Result<impl IntoResponse, Error> {
    // Redirect to the Microsoft Entra login page
    let auth_url = state.auth_provider.get_authorization_url()?;
    Ok(Redirect::to(&auth_url))
}

async fn auth_callback_handler(
    State(state): State<AppState>,
    Query(params): Query<HashMap<String, String>>,
    cookies: Cookies,
) -> Result<impl IntoResponse, Error> {
    // Handle the auth callback from Microsoft Entra
    let code = params.get("code").ok_or(Error::InvalidAuthRequest)?;
    let token = state.auth_provider.exchange_code_for_token(code).await?;

    // Set a secure session cookie
    let session = cookies.create_session(&token)?;

    Ok(Redirect::to("/dashboard"))
}
```
Testing Microsoft Entra Integration
For testing, use the development mode with mock responses:
```yaml
# config/development.yaml
auth:
  provider: "entra"
  entra:
    mock_enabled: true
    mock_users:
      - email: "[email protected]"
        name: "Test User"
        id: "test-user-id"
        roles: ["user"]
```
Local Authentication
For development or when Microsoft Entra is not available:
```rust
use navius::auth::providers::LocalAuthProvider;

async fn configure_local_auth() -> impl AuthProvider {
    LocalAuthProvider::new()
        .add_user("admin", "secure-password", vec!["admin"])
        .add_user("user", "user-password", vec!["user"])
}
```
Implementing Multi-Factor Authentication
TOTP (Time-based One-Time Password)
```rust
use navius::auth::mfa::{TotpConfig, TotpService};

// Initialize the TOTP service
let totp_service = TotpService::new(TotpConfig {
    issuer: "Your App Name".to_string(),
    digits: 6,
    period: 30,
    algorithm: "SHA1".to_string(),
});

// Generate a secret for a user
async fn setup_mfa(user_id: Uuid, totp_service: &TotpService) -> Result<String, Error> {
    let user = get_user(user_id).await?;
    let secret = totp_service.generate_secret();
    let provisioning_uri = totp_service.get_provisioning_uri(&user.email, &secret);

    // Store the secret in the database
    store_mfa_secret(user_id, &secret).await?;

    // Return the provisioning URI for QR code generation
    Ok(provisioning_uri)
}

// Verify a TOTP code
async fn verify_totp(user_id: Uuid, code: &str, totp_service: &TotpService) -> Result<bool, Error> {
    let user = get_user(user_id).await?;
    let is_valid = totp_service.verify(&user.mfa_secret, code)?;
    Ok(is_valid)
}
```
WebAuthn (Passwordless) Support
For implementing WebAuthn (FIDO2) passwordless authentication:
```rust
use navius::auth::webauthn::{WebAuthnConfig, WebAuthnService};

// Initialize the WebAuthn service
let webauthn_service = WebAuthnService::new(WebAuthnConfig {
    rp_id: "your-app.com".to_string(),
    rp_name: "Your App Name".to_string(),
    origin: "https://your-app.com".to_string(),
});

// Register a new credential
async fn register_credential(
    user_id: Uuid,
    credential: CredentialCreationResponse,
    webauthn_service: &WebAuthnService,
) -> Result<(), Error> {
    let credential = webauthn_service.register_credential(user_id, credential).await?;
    store_credential(user_id, credential).await?;
    Ok(())
}

// Authenticate with a credential
async fn authenticate(
    credential: CredentialAssertionResponse,
    webauthn_service: &WebAuthnService,
) -> Result<Uuid, Error> {
    let user_id = webauthn_service.authenticate(credential).await?;
    Ok(user_id)
}
```
Token Management
Token Types
Navius uses several token types:
- ID Token: Contains user identity information
- Access Token: Grants access to protected resources
- Refresh Token: Used to obtain new access tokens
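The split between short-lived access tokens and refresh tokens works because clients track expiry and refresh proactively. The following is an illustrative sketch of that bookkeeping; the `AuthToken` shape here is hypothetical, not the framework's actual type:

```rust
use std::time::{Duration, Instant};

/// Illustrative token record; not Navius's actual AuthToken type.
struct AuthToken {
    access_token: String,
    issued_at: Instant,
    expires_in: Duration,
}

impl AuthToken {
    /// Refresh slightly before real expiry so in-flight requests
    /// never carry an expired access token.
    fn needs_refresh(&self, leeway: Duration) -> bool {
        self.issued_at.elapsed() + leeway >= self.expires_in
    }
}

fn main() {
    let token = AuthToken {
        access_token: "example".to_string(),
        issued_at: Instant::now(),
        expires_in: Duration::from_secs(3600),
    };
    // A fresh one-hour token does not need a refresh yet
    assert!(!token.needs_refresh(Duration::from_secs(60)));
    println!("token {} still valid", token.access_token);
}
```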
Token Storage
Securely store tokens:
```rust
use navius::auth::tokens::{RedisTokenStore, TokenStore};

// Initialize the token store
let token_store = RedisTokenStore::new(redis_connection);

// Store a token
async fn store_token(user_id: Uuid, token: &AuthToken, token_store: &impl TokenStore) -> Result<(), Error> {
    token_store.store(user_id, token).await?;
    Ok(())
}

// Retrieve a token
async fn get_token(user_id: Uuid, token_store: &impl TokenStore) -> Result<AuthToken, Error> {
    let token = token_store.get(user_id).await?;
    Ok(token)
}

// Revoke a token
async fn revoke_token(user_id: Uuid, token_store: &impl TokenStore) -> Result<(), Error> {
    token_store.revoke(user_id).await?;
    Ok(())
}
```
Token Refresh
Implement token refresh to maintain sessions:
```rust
async fn refresh_token(
    user_id: Uuid,
    refresh_token: &str,
    auth_provider: &impl AuthProvider,
    token_store: &impl TokenStore,
) -> Result<AuthToken, Error> {
    let new_token = auth_provider.refresh_token(refresh_token).await?;
    token_store.store(user_id, &new_token).await?;
    Ok(new_token)
}
```
Session Management
Session Configuration
Configure secure sessions:
```rust
use navius::auth::session::{SessionConfig, SessionManager};

let session_manager = SessionManager::new(SessionConfig {
    cookie_name: "session".to_string(),
    cookie_domain: Some("your-app.com".to_string()),
    cookie_path: "/".to_string(),
    cookie_secure: true,
    cookie_http_only: true,
    cookie_same_site: SameSite::Lax,
    expiry: Duration::hours(2),
});
```
Session Creation and Validation
```rust
// Create a new session
async fn create_session(
    user_id: Uuid,
    token: &AuthToken,
    session_manager: &SessionManager,
) -> Result<Cookie, Error> {
    let session = session_manager.create_session(user_id, token)?;
    Ok(session)
}

// Validate a session
async fn validate_session(
    cookies: &Cookies,
    session_manager: &SessionManager,
) -> Result<Uuid, Error> {
    let user_id = session_manager.validate_session(cookies)?;
    Ok(user_id)
}

// End a session
async fn end_session(
    cookies: &mut Cookies,
    session_manager: &SessionManager,
) -> Result<(), Error> {
    session_manager.end_session(cookies)?;
    Ok(())
}
```
Authentication Middleware
Create middleware to protect routes:
```rust
use axum::{
    extract::{Request, State},
    middleware::Next,
    response::Response,
};

// Authentication middleware
async fn auth_middleware(
    State(state): State<AppState>,
    cookies: Cookies,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    match state.session_manager.validate_session(&cookies) {
        Ok(user_id) => {
            // Add the user ID to request extensions
            let mut req = req;
            req.extensions_mut().insert(UserId(user_id));

            // Continue to the handler
            Ok(next.run(req).await)
        }
        Err(_) => {
            // Not authenticated
            Err(StatusCode::UNAUTHORIZED)
        }
    }
}

// Apply the middleware to protected routes
let app = Router::new()
    .route("/", get(public_handler))
    .route("/dashboard", get(dashboard_handler))
    .route_layer(middleware::from_fn_with_state(app_state.clone(), auth_middleware))
    .with_state(app_state);
```
Security Considerations
Password Policies
Implement strong password policies:
```rust
use navius::auth::password::{PasswordPolicy, PasswordValidator};

let password_policy = PasswordPolicy {
    min_length: 12,
    require_uppercase: true,
    require_lowercase: true,
    require_digits: true,
    require_special_chars: true,
    max_repeated_chars: 3,
};

let validator = PasswordValidator::new(password_policy);

fn validate_password(validator: &PasswordValidator, password: &str) -> Result<(), String> {
    validator.validate(password)
}
```
Brute Force Protection
Implement account lockout after failed attempts:
```rust
use navius::auth::protection::{BruteForceConfig, BruteForceProtection};

let protection = BruteForceProtection::new(BruteForceConfig {
    max_attempts: 5,
    lockout_duration: Duration::minutes(30),
    attempt_reset: Duration::hours(24),
});

async fn check_login_attempt(
    username: &str,
    ip_address: &str,
    protection: &BruteForceProtection,
) -> Result<(), Error> {
    protection.check_attempts(username, ip_address).await?;
    Ok(())
}

async fn record_failed_attempt(
    username: &str,
    ip_address: &str,
    protection: &BruteForceProtection,
) -> Result<(), Error> {
    protection.record_failed_attempt(username, ip_address).await?;
    Ok(())
}

async fn reset_attempts(
    username: &str,
    protection: &BruteForceProtection,
) -> Result<(), Error> {
    protection.reset_attempts(username).await?;
    Ok(())
}
```
Secure Logout
Implement secure logout functionality:
```rust
async fn logout_handler(
    State(state): State<AppState>,
    cookies: Cookies,
) -> Result<impl IntoResponse, Error> {
    // End the session
    state.session_manager.end_session(&cookies)?;

    // Revoke the token if using OAuth
    if let Some(user_id) = cookies.get_user_id() {
        state.token_store.revoke(user_id).await?;
    }

    Ok(Redirect::to("/login"))
}
```
Testing Authentication
Unit Testing
Test authentication components:
```rust
#[tokio::test]
async fn test_token_store() {
    let store = InMemoryTokenStore::new();
    let user_id = Uuid::new_v4();
    let token = AuthToken::new("access", "refresh", "id", 3600);

    store.store(user_id, &token).await.unwrap();
    let retrieved = store.get(user_id).await.unwrap();

    assert_eq!(token.access_token, retrieved.access_token);
}
```
Integration Testing
Test the authentication flow:
```rust
#[tokio::test]
async fn test_auth_flow() {
    // Set up a test app with a mock auth provider
    let app = test_app().await;

    // Test the login redirect
    let response = app.get("/login").send().await;
    assert_eq!(response.status(), StatusCode::FOUND);

    // Test the callback with a mock code
    let response = app.get("/auth/callback?code=test-code").send().await;
    assert_eq!(response.status(), StatusCode::FOUND);

    // Test accessing a protected route
    let response = app.get("/dashboard")
        .header("Cookie", "session=test-session")
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::OK);
}
```
Implementing Single Sign-On (SSO)
Enable SSO across multiple applications:
```yaml
auth:
  provider: "entra"
  entra:
    tenant_id: "your-tenant-id"
    client_id: "your-client-id"
    client_secret: "your-client-secret"
    redirect_uri: "https://your-app.com/auth/callback"
    scopes: ["openid", "profile", "email"]
    enable_sso: true
    sso_domains: ["yourdomain.com"]
```
Troubleshooting
Common Issues
- Redirect URI Mismatch: Ensure the redirect URI in your application config exactly matches the one registered in Microsoft Entra.
- Token Expiration: Implement proper token refresh handling.
- Clock Skew: TOTP validation can fail if server clocks are not synchronized.
- CORS Issues: Ensure proper CORS configuration when authenticating from SPAs.
Debugging Authentication
Enable debug logging for authentication:
```rust
// Initialize the logger with auth debug logging enabled
tracing_subscriber::fmt()
    .with_env_filter("navius::auth=debug")
    .init();
```
Related Resources
- Security Best Practices
- Authorization Guide
- Authentication Example
- Microsoft Entra Documentation
- OWASP Authentication Best Practices
title: "Authorization Guide" description: "Comprehensive guide for implementing role-based and attribute-based authorization in Navius applications" category: "Guides" tags: ["security", "authorization", "roles", "permissions", "access control", "RBAC", "ABAC"] last_updated: "April 6, 2025" version: "1.0"
Authorization Guide
Overview
This guide provides detailed instructions for implementing authorization in Navius applications. Authorization determines what actions authenticated users can perform and what resources they can access.
Authorization Concepts
Authentication vs. Authorization
- Authentication (covered in Authentication Guide) verifies who the user is
- Authorization (covered in this guide) determines what the user can do
Authorization Models
Navius supports multiple authorization models:
- Role-Based Access Control (RBAC) - Permissions assigned to roles, which are assigned to users
- Attribute-Based Access Control (ABAC) - Permissions based on user attributes, resource attributes, and context
- Resource-Based Access Control - Permissions tied directly to resources
Role-Based Access Control (RBAC)
Core Components
RBAC consists of:
- Users - Individuals who need access to the system
- Roles - Named collections of permissions (e.g., Admin, Editor, Viewer)
- Permissions - Specific actions that can be performed (e.g., read, write, delete)
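The user → role → permission resolution above can be modeled with plain maps. This standalone sketch is independent of Navius's `AuthorizationService`; all names in it are illustrative:

```rust
use std::collections::{HashMap, HashSet};

/// Minimal in-memory RBAC: roles map to permission sets,
/// users map to role lists. Illustrative only.
struct Rbac {
    roles: HashMap<String, HashSet<String>>,
    users: HashMap<String, Vec<String>>,
}

impl Rbac {
    /// A user has a permission if any of their roles grants it.
    fn has_permission(&self, user: &str, permission: &str) -> bool {
        self.users.get(user).map_or(false, |roles| {
            roles.iter().any(|role| {
                self.roles
                    .get(role)
                    .map_or(false, |perms| perms.contains(permission))
            })
        })
    }
}

/// Build an example with viewer/editor roles like those used in this guide.
fn demo_rbac() -> Rbac {
    let mut roles = HashMap::new();
    roles.insert(
        "viewer".to_string(),
        ["user:read", "config:read"].iter().map(|s| s.to_string()).collect(),
    );
    roles.insert(
        "editor".to_string(),
        ["user:read", "user:write", "config:read"].iter().map(|s| s.to_string()).collect(),
    );
    let mut users = HashMap::new();
    users.insert("alice".to_string(), vec!["viewer".to_string()]);
    users.insert("bob".to_string(), vec!["editor".to_string()]);
    Rbac { roles, users }
}

fn main() {
    let rbac = demo_rbac();
    assert!(rbac.has_permission("alice", "user:read"));
    assert!(!rbac.has_permission("alice", "user:write"));
    assert!(rbac.has_permission("bob", "user:write"));
    println!("rbac checks passed");
}
```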
Implementing RBAC in Navius
Configuration
Configure RBAC in your `config/default.yaml`:
```yaml
authorization:
  type: "rbac"
  default_role: "user"
  roles:
    - name: "admin"
      permissions: ["user:read", "user:write", "user:delete", "config:read", "config:write"]
    - name: "editor"
      permissions: ["user:read", "user:write", "config:read"]
    - name: "viewer"
      permissions: ["user:read", "config:read"]
```
Implementation
Create the authorization service:
```rust
use navius::auth::authorization::{AuthorizationService, RoleBasedAuthorization};

// Create the authorization service
let auth_service = RoleBasedAuthorization::from_config(&config)?;

// Check whether a user has a permission
async fn can_perform_action(
    user_id: Uuid,
    permission: &str,
    auth_service: &impl AuthorizationService,
) -> Result<bool, Error> {
    let has_permission = auth_service.has_permission(user_id, permission).await?;
    Ok(has_permission)
}
```
Middleware
Implement authorization middleware:
```rust
use axum::{
    extract::{Request, State},
    middleware::Next,
    response::Response,
};

// Define the required permission for a route
struct RequiredPermission(String);

// Authorization middleware
async fn authorize_middleware(
    State(state): State<AppState>,
    extensions: Extensions,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Get the user ID from the authenticated session
    let user_id = extensions.get::<UserId>().ok_or(StatusCode::UNAUTHORIZED)?;

    // Get the required permission from the route data
    let required_permission = req
        .extensions()
        .get::<RequiredPermission>()
        .ok_or(StatusCode::INTERNAL_SERVER_ERROR)?;

    // Check whether the user has the required permission
    let has_permission = state
        .auth_service
        .has_permission(user_id.0, &required_permission.0)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    if !has_permission {
        return Err(StatusCode::FORBIDDEN);
    }

    Ok(next.run(req).await)
}

// Apply the middleware to routes
let app = Router::new()
    .route("/users", get(get_users_handler))
    .layer(axum::Extension(RequiredPermission("user:read".to_string())))
    .route_layer(middleware::from_fn_with_state(app_state.clone(), authorize_middleware))
    .with_state(app_state);
```
### Role Management Functions

```rust
// Assign a role to a user
async fn assign_role(
    user_id: Uuid,
    role: &str,
    auth_service: &impl AuthorizationService,
) -> Result<(), Error> {
    auth_service.assign_role(user_id, role).await?;
    Ok(())
}

// Remove a role from a user
async fn remove_role(
    user_id: Uuid,
    role: &str,
    auth_service: &impl AuthorizationService,
) -> Result<(), Error> {
    auth_service.remove_role(user_id, role).await?;
    Ok(())
}

// Get all roles for a user
async fn get_user_roles(
    user_id: Uuid,
    auth_service: &impl AuthorizationService,
) -> Result<Vec<String>, Error> {
    let roles = auth_service.get_roles(user_id).await?;
    Ok(roles)
}
```
## Attribute-Based Access Control (ABAC)
ABAC provides more fine-grained control by considering:
- User attributes - Properties of the user (role, department, location, clearance)
- Resource attributes - Properties of the resource being accessed (type, owner, classification)
- Action attributes - Properties of the action being performed (read, write, delete)
- Context attributes - Environmental factors (time, location, device)
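Combined, these attributes feed a single allow/deny decision. A minimal, framework-free sketch of a default-deny ABAC check (the structs and rules are illustrative, not Navius's real `AccessRequest` type):

```rust
/// Illustrative attribute carriers (not Navius's real types).
struct User {
    id: u64,
    roles: Vec<String>,
    team_id: u64,
}
struct Resource {
    kind: String,
    owner_id: u64,
    team_id: u64,
}

/// Default-deny ABAC decision over user, action, and resource attributes.
fn allow(user: &User, action: &str, resource: &Resource) -> bool {
    // Admins may perform any action.
    if user.roles.iter().any(|r| r == "admin") {
        return true;
    }
    // Anyone may read a resource they own.
    if action == "read" && resource.owner_id == user.id {
        return true;
    }
    // Managers may read their own team's data.
    action == "read"
        && resource.kind == "team_data"
        && resource.team_id == user.team_id
        && user.roles.iter().any(|r| r == "manager")
}
```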
### Implementing ABAC in Navius

#### Configuration

Configure ABAC in your `config/default.yaml`:

```yaml
authorization:
  type: "abac"
  policy_location: "./policies"
  default_deny: true
```
#### Creating Policies

Define ABAC policies in Rego (the Open Policy Agent policy language):

```rego
# policies/user_access.rego
package navius.authorization

import future.keywords.in

default allow = false

# Allow users to read their own profile
allow {
    input.action == "read"
    input.resource.type == "user_profile"
    input.resource.id == input.user.id
}

# Allow users to read public documents
allow {
    input.action == "read"
    input.resource.type == "document"
    input.resource.visibility == "public"
}

# Allow admins to perform any action
allow {
    "admin" in input.user.roles
}

# Allow managers to access their team's data
allow {
    input.action == "read"
    input.resource.type == "team_data"
    input.resource.team_id == input.user.team_id
    input.user.role == "manager"
}
```
#### Implementation

Create the ABAC authorization service:

```rust
use navius::auth::authorization::{AuthorizationService, AbacAuthorization};

// Create the authorization service
let auth_service = AbacAuthorization::new(&config)?;

// Check if a user can perform an action on a resource
async fn can_access_resource(
    user: &User,
    action: &str,
    resource: &Resource,
    context: &Context,
    auth_service: &impl AuthorizationService,
) -> Result<bool, Error> {
    let access_request = AccessRequest { user, action, resource, context };
    let allowed = auth_service.evaluate(access_request).await?;
    Ok(allowed)
}
```
#### ABAC Middleware

```rust
// ABAC authorization middleware
async fn abac_authorize_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Get the user from the authenticated session
    // (placed in the request extensions by the authentication middleware)
    let user = req.extensions().get::<User>().ok_or(StatusCode::UNAUTHORIZED)?;

    // Get the resource and action from the request
    let resource = extract_resource_from_request(&req)?;
    let action = extract_action_from_method(req.method())?;

    // Build the context from the request
    let context = Context {
        time: chrono::Utc::now(),
        ip_address: req.remote_addr().map(|addr| addr.to_string()),
        user_agent: req
            .headers()
            .get("User-Agent")
            .map(|ua| ua.to_str().unwrap_or("").to_string()),
    };

    // Evaluate the access request
    let access_request = AccessRequest {
        user,
        action: &action,
        resource: &resource,
        context: &context,
    };

    let allowed = state
        .auth_service
        .evaluate(access_request)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    if !allowed {
        return Err(StatusCode::FORBIDDEN);
    }

    Ok(next.run(req).await)
}
```
## Resource-Based Access Control

In resource-based access control, permissions are tied directly to resources:

```rust
// A resource that carries its own access control
struct Document {
    id: Uuid,
    title: String,
    content: String,
    owner_id: Uuid,
    shared_with: Vec<Uuid>,
    public: bool,
}

impl Document {
    // Check if a user can access this document
    fn can_access(&self, user_id: Uuid) -> bool {
        self.public || self.owner_id == user_id || self.shared_with.contains(&user_id)
    }

    // Check if a user can edit this document
    fn can_edit(&self, user_id: Uuid) -> bool {
        self.owner_id == user_id || self.shared_with.contains(&user_id)
    }

    // Check if a user can delete this document
    fn can_delete(&self, user_id: Uuid) -> bool {
        self.owner_id == user_id
    }
}
```
## Declarative Authorization with Route Attributes

Navius provides a declarative approach to authorization using route attributes:

```rust
use navius::auth::authorization::{requires_permission, requires_role};

// Require a specific permission
#[get("/users")]
#[requires_permission("user:read")]
async fn get_users_handler() -> impl IntoResponse {
    // Handler implementation
}

// Require a specific role
#[post("/users")]
#[requires_role("admin")]
async fn create_user_handler() -> impl IntoResponse {
    // Handler implementation
}

// Combine multiple requirements
#[delete("/users/{id}")]
#[requires_permission("user:delete")]
#[requires_role("admin")]
async fn delete_user_handler() -> impl IntoResponse {
    // Handler implementation
}
```
## Implementing Permission Checks in Services

### Service Layer Authorization

```rust
// Authorization in a service layer
struct UserService {
    repository: UserRepository,
    auth_service: Box<dyn AuthorizationService>,
}

impl UserService {
    // Create a new user (requires write permission)
    async fn create_user(&self, current_user_id: Uuid, new_user: UserCreate) -> Result<User, Error> {
        // Check if the current user has permission
        let has_permission = self
            .auth_service
            .has_permission(current_user_id, "user:write")
            .await?;
        if !has_permission {
            return Err(Error::PermissionDenied);
        }

        // Proceed with creation
        let user = self.repository.create(new_user).await?;
        Ok(user)
    }

    // Get a user by ID (requires read permission)
    async fn get_user(&self, current_user_id: Uuid, user_id: Uuid) -> Result<User, Error> {
        // Check if the current user has permission
        let has_permission = self
            .auth_service
            .has_permission(current_user_id, "user:read")
            .await?;
        if !has_permission {
            return Err(Error::PermissionDenied);
        }

        // Proceed with retrieval
        let user = self.repository.find_by_id(user_id).await?;
        Ok(user)
    }
}
```
## Dynamic Permissions

### Permission Delegation

```rust
// Delegate a permission temporarily
async fn delegate_permission(
    delegator_id: Uuid,
    delegatee_id: Uuid,
    permission: &str,
    duration: Duration,
    auth_service: &impl AuthorizationService,
) -> Result<(), Error> {
    // Check that the delegator actually holds the permission
    let has_permission = auth_service.has_permission(delegator_id, permission).await?;
    if !has_permission {
        return Err(Error::PermissionDenied);
    }

    // Delegate the permission
    auth_service
        .delegate_permission(delegator_id, delegatee_id, permission, duration)
        .await?;
    Ok(())
}
```
### Conditional Permissions

```rust
// Permission based on resource ownership
async fn can_edit_document(
    user_id: Uuid,
    document_id: Uuid,
    document_service: &DocumentService,
    auth_service: &impl AuthorizationService,
) -> Result<bool, Error> {
    // Get the document
    let document = document_service.find_by_id(document_id).await?;

    // The owner can always edit
    if document.owner_id == user_id {
        return Ok(true);
    }

    // Otherwise, require a global edit permission
    let has_permission = auth_service
        .has_permission(user_id, "document:edit:any")
        .await?;
    Ok(has_permission)
}
```
## Hierarchical Role Structure

Navius supports hierarchical roles:

```yaml
authorization:
  type: "rbac"
  hierarchical: true
  roles:
    - name: "admin"
      inherits: ["editor"]
      permissions: ["user:delete", "config:write"]
    - name: "editor"
      inherits: ["viewer"]
      permissions: ["user:write"]
    - name: "viewer"
      permissions: ["user:read", "config:read"]
```

```rust
// Implementation with hierarchical roles
let auth_service = RoleBasedAuthorization::from_config(&config)?;

// Even though a user only has the "admin" role, they'll have all permissions
// from the admin, editor, and viewer roles due to the hierarchy.
```
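Resolving hierarchical roles amounts to a transitive walk over `inherits`. A self-contained sketch (the `RoleDef` type is hypothetical; Navius performs this resolution internally):

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical role definition: direct permissions plus inherited roles.
struct RoleDef {
    permissions: Vec<&'static str>,
    inherits: Vec<&'static str>,
}

/// Walk the `inherits` chain transitively, guarding against cycles.
fn effective_permissions(
    role: &'static str,
    defs: &HashMap<&'static str, RoleDef>,
) -> HashSet<&'static str> {
    let mut perms = HashSet::new();
    let mut seen = HashSet::new();
    let mut stack = vec![role];
    while let Some(r) = stack.pop() {
        if !seen.insert(r) {
            continue; // already visited (handles accidental cycles)
        }
        if let Some(def) = defs.get(r) {
            perms.extend(def.permissions.iter().copied());
            stack.extend(def.inherits.iter().copied());
        }
    }
    perms
}

/// The admin -> editor -> viewer hierarchy from the configuration above.
fn demo_defs() -> HashMap<&'static str, RoleDef> {
    let mut m = HashMap::new();
    m.insert("viewer", RoleDef { permissions: vec!["user:read", "config:read"], inherits: vec![] });
    m.insert("editor", RoleDef { permissions: vec!["user:write"], inherits: vec!["viewer"] });
    m.insert("admin", RoleDef { permissions: vec!["user:delete", "config:write"], inherits: vec!["editor"] });
    m
}
```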
## Permission Management UI

### Role Management Component

```rust
// Handler for getting all roles
async fn get_roles_handler(
    State(state): State<AppState>,
) -> Result<Json<Vec<Role>>, StatusCode> {
    let roles = state
        .auth_service
        .get_all_roles()
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(Json(roles))
}

// Handler for creating a new role
async fn create_role_handler(
    State(state): State<AppState>,
    Json(payload): Json<RoleCreate>,
) -> Result<Json<Role>, StatusCode> {
    let role = state
        .auth_service
        .create_role(payload.name, payload.permissions)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(Json(role))
}

// Handler for updating a role
async fn update_role_handler(
    State(state): State<AppState>,
    Path(role_name): Path<String>,
    Json(payload): Json<RoleUpdate>,
) -> Result<Json<Role>, StatusCode> {
    let role = state
        .auth_service
        .update_role(role_name, payload.permissions)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(Json(role))
}
```
## Testing Authorization

### Unit Testing

```rust
#[tokio::test]
async fn test_rbac_authorization() {
    // Set up a test authorization service
    let mut config = Config::default();
    config.set("authorization.type", "rbac").unwrap();
    config.set("authorization.roles", vec![
        Role {
            name: "admin".to_string(),
            permissions: vec!["user:read".to_string(), "user:write".to_string()],
        },
        Role {
            name: "viewer".to_string(),
            permissions: vec!["user:read".to_string()],
        },
    ]).unwrap();

    let auth_service = RoleBasedAuthorization::from_config(&config).unwrap();

    // Assign roles
    let admin_id = Uuid::new_v4();
    let viewer_id = Uuid::new_v4();
    auth_service.assign_role(admin_id, "admin").await.unwrap();
    auth_service.assign_role(viewer_id, "viewer").await.unwrap();

    // Test permissions
    assert!(auth_service.has_permission(admin_id, "user:read").await.unwrap());
    assert!(auth_service.has_permission(admin_id, "user:write").await.unwrap());
    assert!(auth_service.has_permission(viewer_id, "user:read").await.unwrap());
    assert!(!auth_service.has_permission(viewer_id, "user:write").await.unwrap());
}
```
### Integration Testing

```rust
#[tokio::test]
async fn test_protected_routes() {
    // Set up a test app with authentication and authorization
    let app = test_app().await;

    // Log in as admin and as viewer
    let admin_token = app.login("admin", "password").await;
    let viewer_token = app.login("viewer", "password").await;

    // The admin can read and create users
    let response = app.get("/users")
        .header("Authorization", format!("Bearer {}", admin_token))
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::OK);

    let response = app.post("/users")
        .header("Authorization", format!("Bearer {}", admin_token))
        .json(&user_payload)
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::CREATED);

    // The viewer can read but not create users
    let response = app.get("/users")
        .header("Authorization", format!("Bearer {}", viewer_token))
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::OK);

    let response = app.post("/users")
        .header("Authorization", format!("Bearer {}", viewer_token))
        .json(&user_payload)
        .send()
        .await;
    assert_eq!(response.status(), StatusCode::FORBIDDEN);
}
```
## Best Practices

### Principle of Least Privilege
- Assign the minimum permissions necessary
- Regularly review and remove unnecessary permissions
- Use time-limited elevated privileges when needed
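Time-limited elevated privileges can be modeled as grants with an expiry that simply stop counting once the clock passes them. A minimal sketch using only the standard library (all names here are hypothetical):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical store of time-boxed privilege grants.
struct ElevatedGrants {
    // (user id, permission) -> expiry instant
    grants: HashMap<(u64, String), Instant>,
}

impl ElevatedGrants {
    fn new() -> Self {
        Self { grants: HashMap::new() }
    }

    /// Grant `perm` to `user` for `ttl`; re-granting extends the window.
    fn grant(&mut self, user: u64, perm: &str, ttl: Duration) {
        self.grants.insert((user, perm.to_string()), Instant::now() + ttl);
    }

    /// The grant counts only while its expiry is still in the future.
    fn is_active(&self, user: u64, perm: &str) -> bool {
        self.grants
            .get(&(user, perm.to_string()))
            .map_or(false, |expiry| Instant::now() < *expiry)
    }
}
```

Expired grants are denied without any cleanup step, though a periodic sweep would keep the map small.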
### Authorization Design
- Design permissions around business operations, not technical operations
- Group related permissions into roles
- Consider the user experience when defining permission granularity
- Document the authorization model
### Auditing and Logging

```rust
// Log permission checks
async fn log_permission_check(
    user_id: Uuid,
    permission: &str,
    allowed: bool,
    auth_service: &impl AuthorizationService,
) -> Result<(), Error> {
    let audit_log = AuditLog {
        timestamp: chrono::Utc::now(),
        user_id,
        action: "permission_check".to_string(),
        resource: permission.to_string(),
        success: allowed,
    };
    auth_service.log_audit_event(audit_log).await?;
    Ok(())
}
```
## Troubleshooting

### Common Issues
- Missing Permissions: Verify role assignments and permission inheritance
- Authorization Service Misconfiguration: Check YAML configuration for typos
- Middleware Order: Authentication middleware must run before authorization
- Cache Invalidation: Permission changes may be cached; implement proper invalidation
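For the cache-invalidation pitfall, the fix is to drop a user's cached permissions whenever their roles or grants change. A toy illustration (hypothetical types; a real deployment would also need TTLs and cross-node invalidation):

```rust
use std::collections::{HashMap, HashSet};

/// Toy per-user permission cache (no TTL or sharing; illustration only).
struct PermissionCache {
    cached: HashMap<u64, HashSet<String>>,
}

impl PermissionCache {
    fn new() -> Self {
        Self { cached: HashMap::new() }
    }

    /// Return cached permissions, calling `load` only on a cache miss.
    fn get_or_load(
        &mut self,
        user: u64,
        load: impl FnOnce(u64) -> HashSet<String>,
    ) -> &HashSet<String> {
        self.cached.entry(user).or_insert_with(|| load(user))
    }

    /// Call this whenever the user's roles or grants change.
    fn invalidate(&mut self, user: u64) {
        self.cached.remove(&user);
    }
}
```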
### Debugging Authorization

Enable debug logging for authorization:

```rust
// Initialize the logger with auth debugging enabled
tracing_subscriber::fmt()
    .with_env_filter("navius::auth::authorization=debug")
    .init();
```
## Related Resources
- Security Best Practices
- Authentication Implementation Guide
- API Security Guide
- Data Protection Guide
- OWASP Authorization Cheat Sheet
title: "Data Protection Guide" description: "Comprehensive guide for implementing data protection in Navius applications, covering encryption, secure storage, data privacy, and compliance" category: "Guides" tags: ["security", "encryption", "data protection", "privacy", "PII", "compliance", "GDPR"] last_updated: "April 7, 2025" version: "1.0"
# Data Protection Guide

## Overview

This guide provides detailed instructions for implementing data protection in Navius applications. Data protection is critical for safeguarding sensitive information, maintaining user privacy, and ensuring compliance with regulations.
## Data Protection Concepts

### Types of Sensitive Data
- Personally Identifiable Information (PII) - Information that can identify an individual (names, email addresses, phone numbers)
- Protected Health Information (PHI) - Health-related information protected by regulations like HIPAA
- Financial Data - Payment information, account numbers, financial records
- Authentication Data - Passwords, security questions, biometric data
- Business-Sensitive Data - Intellectual property, trade secrets, business plans
### Data Protection Principles
- Data Minimization - Collect and store only what is necessary
- Purpose Limitation - Use data only for its intended purpose
- Storage Limitation - Retain data only as long as necessary
- Integrity and Confidentiality - Protect data from unauthorized access and accidental loss
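Storage limitation, for instance, can be enforced with a simple retention check inside a scheduled cleanup job. A sketch using only the standard library (the function name is illustrative):

```rust
use std::time::{Duration, SystemTime};

/// Storage-limitation check: has a record outlived its retention period?
/// (Illustrative helper; pair it with a scheduled cleanup job.)
fn past_retention(created_at: SystemTime, retention: Duration) -> bool {
    match created_at.elapsed() {
        Ok(age) => age > retention,
        // A creation time in the future means the clock moved; keep the record.
        Err(_) => false,
    }
}
```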
## Encryption Implementation

### Configuration

Configure encryption in your `config/default.yaml`:

```yaml
encryption:
  provider: "aes_gcm"
  key_management: "kms"
  kms:
    provider: "aws"
    key_id: "your-key-id"
    region: "us-west-2"
  data_key_rotation:
    enabled: true
    rotation_period_days: 90
```
### Encryption Service

Implement the encryption service:

```rust
use navius::security::encryption::{EncryptionService, EncryptionConfig};

// Create the encryption service
async fn create_encryption_service(config: &Config) -> Result<impl EncryptionService, Error> {
    let encryption_config = EncryptionConfig::from_config(config)?;
    let encryption_service = EncryptionService::new(encryption_config).await?;
    Ok(encryption_service)
}

// Encrypt data
async fn encrypt_data<T: Serialize>(
    data: &T,
    context: &EncryptionContext,
    encryption_service: &impl EncryptionService,
) -> Result<EncryptedData, Error> {
    let encrypted = encryption_service.encrypt(data, context).await?;
    Ok(encrypted)
}

// Decrypt data
async fn decrypt_data<T: DeserializeOwned>(
    encrypted: &EncryptedData,
    context: &EncryptionContext,
    encryption_service: &impl EncryptionService,
) -> Result<T, Error> {
    let decrypted = encryption_service.decrypt(encrypted, context).await?;
    Ok(decrypted)
}
```
### Encryption Context

Use an encryption context to bind ciphertext to a specific context:

```rust
// Create an encryption context
let context = EncryptionContext::new()
    .with_user_id(user_id)
    .with_resource_type("payment_info")
    .with_resource_id(payment_id)
    .with_purpose("payment_processing");

// Encrypt with the context
let encrypted = encryption_service.encrypt(&payment_info, &context).await?;

// Decrypt with the same context
let decrypted: PaymentInfo = encryption_service.decrypt(&encrypted, &context).await?;
```
### Envelope Encryption

Implement envelope encryption for better key management:

```rust
// Envelope encryption with data keys
async fn envelope_encrypt<T: Serialize>(
    data: &T,
    kms_service: &impl KmsService,
    encryption_service: &impl EncryptionService,
) -> Result<EnvelopeEncryptedData, Error> {
    // Generate a data key
    let data_key = kms_service.generate_data_key().await?;

    // Encrypt the data with the plaintext data key
    let encrypted_data = encryption_service
        .encrypt_with_key(data, &data_key.plaintext)
        .await?;

    // Return the envelope: encrypted data plus the encrypted key
    Ok(EnvelopeEncryptedData {
        encrypted_data,
        encrypted_key: data_key.ciphertext,
        key_id: data_key.key_id,
    })
}

// Envelope decryption
async fn envelope_decrypt<T: DeserializeOwned>(
    envelope: &EnvelopeEncryptedData,
    kms_service: &impl KmsService,
    encryption_service: &impl EncryptionService,
) -> Result<T, Error> {
    // Decrypt the data key
    let data_key = kms_service
        .decrypt_data_key(&envelope.encrypted_key, &envelope.key_id)
        .await?;

    // Decrypt the data with the data key
    let decrypted = encryption_service
        .decrypt_with_key(&envelope.encrypted_data, &data_key)
        .await?;
    Ok(decrypted)
}
```
## Secure Database Storage

### Encrypted Fields in Database

Define a model with encrypted fields:

```rust
#[derive(Debug, Serialize, Deserialize)]
struct User {
    id: Uuid,
    username: String,
    #[encrypted]
    email: String,
    #[encrypted]
    phone_number: Option<String>,
    #[encrypted(sensitive = true)]
    payment_info: Option<PaymentInfo>,
    created_at: DateTime<Utc>,
    updated_at: DateTime<Utc>,
}
```
### Database Repository with Encryption

Implement a repository that handles encryption:

```rust
struct UserRepository {
    db_pool: PgPool,
    encryption_service: Box<dyn EncryptionService>,
}

impl UserRepository {
    // Create a new user with encrypted fields
    async fn create(&self, user: User) -> Result<User, Error> {
        // Create the encryption context
        let context = EncryptionContext::new()
            .with_resource_type("user")
            .with_resource_id(user.id);

        // Encrypt sensitive fields (borrowing so `user` can be returned afterwards)
        let encrypted_email = self.encryption_service.encrypt(&user.email, &context).await?;

        let encrypted_phone = match &user.phone_number {
            Some(phone) => Some(self.encryption_service.encrypt(phone, &context).await?),
            None => None,
        };

        let encrypted_payment_info = match &user.payment_info {
            Some(payment) => {
                let payment_context = context.clone().with_purpose("payment_processing");
                Some(self.encryption_service.encrypt(payment, &payment_context).await?)
            }
            None => None,
        };

        // Store the encrypted data in the database
        sqlx::query!(
            r#"
            INSERT INTO users (id, username, email, phone_number, payment_info, created_at, updated_at)
            VALUES ($1, $2, $3, $4, $5, $6, $7)
            RETURNING id
            "#,
            user.id,
            user.username,
            encrypted_email.to_string(),
            encrypted_phone.map(|e| e.to_string()),
            encrypted_payment_info.map(|e| e.to_string()),
            user.created_at,
            user.updated_at,
        )
        .fetch_one(&self.db_pool)
        .await?;

        Ok(user)
    }

    // Retrieve and decrypt user data
    async fn get_by_id(&self, id: Uuid) -> Result<User, Error> {
        // Query the database for the encrypted user
        let encrypted_user = sqlx::query!(
            r#"
            SELECT id, username, email, phone_number, payment_info, created_at, updated_at
            FROM users
            WHERE id = $1
            "#,
            id,
        )
        .fetch_one(&self.db_pool)
        .await?;

        // Create the encryption context
        let context = EncryptionContext::new()
            .with_resource_type("user")
            .with_resource_id(id);

        // Decrypt sensitive fields
        let email = self.encryption_service
            .decrypt::<String>(&EncryptedData::from_string(&encrypted_user.email)?, &context)
            .await?;

        let phone_number = match encrypted_user.phone_number {
            Some(phone) => {
                let encrypted = EncryptedData::from_string(&phone)?;
                Some(self.encryption_service.decrypt::<String>(&encrypted, &context).await?)
            }
            None => None,
        };

        let payment_info = match encrypted_user.payment_info {
            Some(payment) => {
                let encrypted = EncryptedData::from_string(&payment)?;
                let payment_context = context.clone().with_purpose("payment_processing");
                Some(self.encryption_service.decrypt::<PaymentInfo>(&encrypted, &payment_context).await?)
            }
            None => None,
        };

        // Construct and return the decrypted user
        Ok(User {
            id: encrypted_user.id,
            username: encrypted_user.username,
            email,
            phone_number,
            payment_info,
            created_at: encrypted_user.created_at,
            updated_at: encrypted_user.updated_at,
        })
    }
}
```
## Data Masking and Anonymization

### Data Masking

Implement data masking for displaying sensitive information:

```rust
use navius::security::masking::{MaskingService, MaskingStrategy};

// Create the masking service
let masking_service = MaskingService::new();

// Mask PII with different strategies
fn mask_user_data(user: &User, masking_service: &MaskingService) -> MaskedUser {
    MaskedUser {
        id: user.id,
        username: user.username.clone(),
        email: masking_service.mask_email(&user.email),
        phone_number: user.phone_number.as_ref().map(|p| masking_service.mask_phone(p)),
        payment_info: user.payment_info.as_ref().map(|p| masking_service.mask_payment_info(p)),
    }
}

// Example masking strategies
let masked_email = masking_service.mask_email("john.smith@example.com"); // "j***.*****@e******.com"
let masked_phone = masking_service.mask_phone("+1-555-123-4567");        // "+*-***-***-4567"
let masked_card = masking_service.mask_card_number("4111111111111111");  // "************1111"
```
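For reference, the two most common masking strategies are easy to express directly. This standalone sketch (not the Navius `MaskingService` implementation) keeps the last four card digits and the first character of an email's local part:

```rust
/// Keep only the last four digits of a card number.
fn mask_card_number(card: &str) -> String {
    let digits: Vec<char> = card.chars().filter(|c| c.is_ascii_digit()).collect();
    let keep_from = digits.len().saturating_sub(4);
    digits
        .iter()
        .enumerate()
        .map(|(i, c)| if i < keep_from { '*' } else { *c })
        .collect()
}

/// Mask an email's local part, keeping its first character.
fn mask_email(email: &str) -> String {
    match email.split_once('@') {
        Some((local, domain)) => {
            let mut chars = local.chars();
            match chars.next() {
                Some(first) => format!("{}{}@{}", first, "*".repeat(chars.count()), domain),
                None => format!("@{}", domain),
            }
        }
        None => "*".repeat(email.chars().count()),
    }
}
```

Masking is display-only: it protects screens and logs, but the underlying storage still needs encryption.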
### Data Anonymization

Implement data anonymization for analytics:

```rust
use navius::security::anonymization::{AnonymizationService, AnonymizationStrategy};

// Create the anonymization service
let anonymization_service = AnonymizationService::new();

// Anonymize data for analytics
fn anonymize_user_data(user: &User, anonymization_service: &AnonymizationService) -> AnonymizedUser {
    AnonymizedUser {
        id: anonymization_service.hash_id(user.id),
        age_range: anonymization_service.bin_age(user.age),
        region: anonymization_service.generalize_location(&user.location),
        activity_level: anonymization_service.categorize_activity(user.login_count),
    }
}

// Apply k-anonymity to a data set
async fn get_k_anonymized_dataset(
    users: Vec<User>,
    k: usize,
    anonymization_service: &AnonymizationService,
) -> Result<Vec<AnonymizedUser>, Error> {
    anonymization_service.k_anonymize(users, k).await
}
```
## Secure File Storage

### Encrypted File Storage

Store files securely with encryption:

```rust
use navius::storage::files::{FileStorage, EncryptedFileStorage};

// Create encrypted file storage backed by S3
let file_storage = EncryptedFileStorage::new(
    S3FileStorage::new(s3_client),
    encryption_service,
);

// Store a file with encryption
async fn store_file(
    file_data: &[u8],
    file_name: &str,
    content_type: &str,
    user_id: Uuid,
    file_storage: &impl FileStorage,
) -> Result<FileMetadata, Error> {
    // Create the metadata
    let metadata = FileMetadata {
        owner_id: user_id,
        content_type: content_type.to_string(),
        original_name: file_name.to_string(),
        created_at: Utc::now(),
    };

    // Store the file with its metadata
    let stored_file = file_storage.store(file_data, metadata).await?;
    Ok(stored_file.metadata)
}

// Retrieve a file with decryption
async fn get_file(
    file_id: Uuid,
    user_id: Uuid,
    file_storage: &impl FileStorage,
) -> Result<(Vec<u8>, FileMetadata), Error> {
    // Check access permission
    if !can_access_file(file_id, user_id).await? {
        return Err(Error::AccessDenied);
    }

    // Retrieve and decrypt the file
    let file = file_storage.get(file_id).await?;
    Ok((file.data, file.metadata))
}
```
## Data Privacy Features

### Data Subject Rights

Implement features for GDPR compliance:

```rust
use navius::privacy::{DataSubjectService, DataSubjectRequest};

// Create the data subject service
let data_subject_service = DataSubjectService::new(
    user_repository,
    activity_repository,
    file_storage,
);

// Handle a right-to-access request
async fn handle_access_request(
    user_id: Uuid,
    data_subject_service: &DataSubjectService,
) -> Result<DataExport, Error> {
    let request = DataSubjectRequest::new(user_id, RequestType::Access);
    let export = data_subject_service.process_access_request(request).await?;
    Ok(export)
}

// Handle a right-to-erasure request (right to be forgotten)
async fn handle_erasure_request(
    user_id: Uuid,
    data_subject_service: &DataSubjectService,
) -> Result<ErasureConfirmation, Error> {
    let request = DataSubjectRequest::new(user_id, RequestType::Erasure);
    let confirmation = data_subject_service.process_erasure_request(request).await?;
    Ok(confirmation)
}

// Handle a data portability request
async fn handle_portability_request(
    user_id: Uuid,
    format: ExportFormat,
    data_subject_service: &DataSubjectService,
) -> Result<PortableData, Error> {
    let request = DataSubjectRequest::new(user_id, RequestType::Portability)
        .with_export_format(format);
    let portable_data = data_subject_service.process_portability_request(request).await?;
    Ok(portable_data)
}
```
### User Consent Management

Implement consent tracking:

```rust
use navius::privacy::consent::{ConsentService, ConsentRecord};

// Create the consent service
let consent_service = ConsentService::new(consent_repository);

// Record user consent
async fn record_user_consent(
    user_id: Uuid,
    purpose: &str,
    granted: bool,
    consent_service: &ConsentService,
) -> Result<ConsentRecord, Error> {
    let consent = ConsentRecord::new(user_id, purpose.to_string(), granted, Utc::now());
    let saved_consent = consent_service.record_consent(consent).await?;
    Ok(saved_consent)
}

// Check whether a user has consented to a specific purpose
async fn has_user_consented(
    user_id: Uuid,
    purpose: &str,
    consent_service: &ConsentService,
) -> Result<bool, Error> {
    let consented = consent_service.has_consent(user_id, purpose).await?;
    Ok(consented)
}

// Revoke consent
async fn revoke_consent(
    user_id: Uuid,
    purpose: &str,
    consent_service: &ConsentService,
) -> Result<(), Error> {
    consent_service.revoke_consent(user_id, purpose).await?;
    Ok(())
}
```
## Data Access Audit Logging

### Audit Trail Implementation

Create a comprehensive audit trail:

```rust
use navius::security::audit::{AuditService, AuditEvent, AuditEventType};

// Create the audit service
let audit_service = AuditService::new(audit_repository);

// Log a data access event
async fn log_data_access(
    user_id: Uuid,
    resource_type: &str,
    resource_id: Uuid,
    action: &str,
    audit_service: &AuditService,
) -> Result<(), Error> {
    let event = AuditEvent::new(
        user_id,
        AuditEventType::DataAccess,
        resource_type.to_string(),
        resource_id,
        action.to_string(),
        Utc::now(),
    );
    audit_service.log_event(event).await?;
    Ok(())
}

// Get the audit trail for a resource
async fn get_resource_audit_trail(
    resource_type: &str,
    resource_id: Uuid,
    audit_service: &AuditService,
) -> Result<Vec<AuditEvent>, Error> {
    let events = audit_service
        .get_events_by_resource(resource_type, resource_id)
        .await?;
    Ok(events)
}

// Get the audit trail for a user
async fn get_user_audit_trail(
    user_id: Uuid,
    audit_service: &AuditService,
) -> Result<Vec<AuditEvent>, Error> {
    let events = audit_service.get_events_by_user(user_id).await?;
    Ok(events)
}
```
## Secure Data Transmission

### TLS Configuration

Configure secure transmission:

```rust
use navius::security::tls::{TlsConfig, TlsVersion};

// Configure TLS settings
let tls_config = TlsConfig {
    minimum_version: TlsVersion::Tls13,
    certificate_path: "/path/to/cert.pem".to_string(),
    private_key_path: "/path/to/key.pem".to_string(),
    verify_client: false,
};

// Apply TLS to the HTTP server
let server = Server::new()
    .with_tls(tls_config)
    .bind("0.0.0.0:443")
    .serve(app);
```
### Secure Headers

Add security headers for additional protection:

```rust
use navius::security::headers::SecurityHeadersLayer;

// Add security headers to all responses
let app = Router::new()
    .route("/", get(handler))
    .layer(SecurityHeadersLayer::new());

// Security headers include:
// - Strict-Transport-Security (HSTS)
// - Content-Security-Policy (CSP)
// - X-Content-Type-Options
// - X-Frame-Options
// - Referrer-Policy
```
## Data Breach Response

### Breach Detection

Implement breach detection:

```rust
use navius::security::breach::{BreachDetectionService, BreachAlert};

// Create the breach detection service
let breach_detection = BreachDetectionService::new(
    audit_service,
    notification_service,
);

// Configure breach detection rules
breach_detection
    .add_rule(RateLimitRule::new("login_failure", 10, Duration::minutes(5)))
    .add_rule(UnusualAccessPatternRule::new())
    .add_rule(DataExfiltrationRule::new(1000, Duration::minutes(10)));

// Handle a breach alert
async fn handle_breach_alert(
    alert: BreachAlert,
    response_service: &BreachResponseService,
) -> Result<(), Error> {
    // Log the alert
    response_service.log_alert(&alert).await?;

    // Notify the security team
    response_service.notify_security_team(&alert).await?;

    // Take automated remediation actions
    match alert.severity {
        Severity::High => {
            response_service.lock_affected_accounts(&alert).await?;
            response_service.revoke_active_sessions(&alert).await?;
        }
        Severity::Medium => {
            response_service.require_reauthentication(&alert).await?;
        }
        Severity::Low => {
            // Monitor only
        }
    }

    Ok(())
}
```
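The rate-limit rule above can be pictured as a sliding-window counter. A self-contained, synchronous sketch (hypothetical simplification; a real service would key events by account or IP):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Sliding-window counter: fire an alert when more than `threshold`
/// events occur within `window`. (Illustrative; names are hypothetical.)
struct RateLimitRule {
    threshold: usize,
    window: Duration,
    events: VecDeque<Instant>,
}

impl RateLimitRule {
    fn new(threshold: usize, window: Duration) -> Self {
        Self { threshold, window, events: VecDeque::new() }
    }

    /// Record one event; returns true when the rule should fire an alert.
    fn record(&mut self, now: Instant) -> bool {
        // Drop events that have slid out of the window.
        while self
            .events
            .front()
            .map_or(false, |t| now.duration_since(*t) > self.window)
        {
            self.events.pop_front();
        }
        self.events.push_back(now);
        self.events.len() > self.threshold
    }
}
```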
## Testing Data Protection

### Unit Testing Encryption

```rust
#[tokio::test]
async fn test_encryption_service() {
    // Set up a test encryption service
    let config = EncryptionConfig {
        provider: "aes_gcm".to_string(),
        key: generate_test_key(),
        ..Default::default()
    };
    let encryption_service = EncryptionService::new(config).await.unwrap();

    // Test data
    let sensitive_data = "sensitive information";
    let context = EncryptionContext::new().with_purpose("test");

    // Encrypt the data
    let encrypted = encryption_service.encrypt(&sensitive_data, &context).await.unwrap();

    // The ciphertext must differ from the plaintext
    assert_ne!(encrypted.ciphertext, sensitive_data.as_bytes());

    // Decrypt the data and verify the round trip
    let decrypted: String = encryption_service.decrypt(&encrypted, &context).await.unwrap();
    assert_eq!(decrypted, sensitive_data);

    // Decrypting with the wrong context must fail
    let wrong_context = EncryptionContext::new().with_purpose("wrong");
    let result: Result<String, _> = encryption_service.decrypt(&encrypted, &wrong_context).await;
    assert!(result.is_err());
}
```
### Integration Testing Data Protection

```rust
#[tokio::test]
async fn test_data_protection_integration() {
    // Set up a test app with data protection
    let app = test_app().await;

    // Create a test user with sensitive data
    let user_data = UserCreate {
        username: "testuser".to_string(),
        email: "testuser@example.com".to_string(),
        phone_number: Some("+1-555-123-4567".to_string()),
        payment_info: Some(PaymentInfo {
            card_number: "4111111111111111".to_string(),
            expiry_date: "12/25".to_string(),
            cardholder_name: "Test User".to_string(),
        }),
    };

    // Create the user
    let response = app.post("/users").json(&user_data).send().await;
    assert_eq!(response.status(), StatusCode::CREATED);
    let user: User = response.json().await;

    // The API returns the decrypted data
    assert_eq!(user.username, user_data.username);
    assert_eq!(user.email, user_data.email);
    assert_eq!(user.phone_number, user_data.phone_number);

    // Check the database directly to verify encryption at rest
    let db_user = sqlx::query!("SELECT * FROM users WHERE id = $1", user.id)
        .fetch_one(&app.db_pool)
        .await
        .unwrap();

    // Sensitive fields must be stored encrypted
    assert_ne!(db_user.email, user_data.email);
    assert!(db_user.email.starts_with("ENC:"));
    if let Some(phone) = &db_user.phone_number {
        assert!(phone.starts_with("ENC:"));
    }
    if let Some(payment) = &db_user.payment_info {
        assert!(payment.starts_with("ENC:"));
    }
}
```
## Compliance Considerations

### GDPR Compliance
Key GDPR requirements for Navius applications:
- Lawful Basis for Processing: Implement consent tracking
- Data Subject Rights: Implement access, erasure, and portability features
- Data Protection by Design: Use encryption and minimization strategies
- Breach Notification: Implement detection and response capabilities
### HIPAA Compliance (Healthcare)
Key HIPAA requirements for healthcare applications:
- PHI Encryption: Implement strong encryption for health data
- Access Controls: Implement role-based access control
- Audit Logging: Maintain comprehensive audit trails
- Business Associate Agreements: Enable BAA compliance
PCI DSS Compliance (Payment Data)
Key PCI DSS requirements for payment processing:
- Secure Transmission: Implement TLS for all payment data
- Storage Restrictions: Avoid storing sensitive authentication data
- Encryption: Protect stored cardholder data with strong encryption
- Access Restrictions: Limit access to payment data
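For example, storage restrictions are commonly met by persisting only a masked card number. A minimal sketch (the function and masking policy are illustrative, not part of Navius):

```rust
/// Mask a card number, keeping only the last four digits,
/// a common PCI DSS-friendly display format.
fn mask_pan(pan: &str) -> String {
    // Ignore separators such as spaces or dashes
    let digits: String = pan.chars().filter(|c| c.is_ascii_digit()).collect();
    if digits.len() <= 4 {
        return "*".repeat(digits.len());
    }
    let last_four = &digits[digits.len() - 4..];
    format!("{}{}", "*".repeat(digits.len() - 4), last_four)
}
```

Calling `mask_pan("4111111111111111")` yields `"************1111"`; the full PAN should never reach logs or general-purpose storage.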
Best Practices
Secure Development Practices
- Security Reviews: Conduct regular security reviews of data handling code
- Dependency Scanning: Regularly check dependencies for vulnerabilities
- Security Testing: Include security tests in CI/CD pipeline
- Code Analysis: Use static code analysis tools to identify security issues
Operational Security
- Key Rotation: Regularly rotate encryption keys
- Access Monitoring: Monitor and audit data access
- Security Updates: Keep all systems updated with security patches
- Incident Response: Maintain an incident response plan
Troubleshooting
Common Issues
- Performance Impact: Optimize encryption operations for performance
- Key Management Issues: Ensure proper key backup and recovery
- Integration Challenges: Verify compatibility with existing systems
- Compliance Gaps: Regularly audit against compliance requirements
Debugging Data Protection
```rust
// Enable detailed logging for data protection components
tracing_subscriber::fmt()
    .with_env_filter("navius::security=debug,navius::privacy=debug")
    .init();
```
Related Resources
- Security Best Practices
- Authentication Implementation Guide
- Authorization Guide
- API Security Guide
- OWASP Data Protection Cheat Sheet
title: "API Security Guide" description: "Comprehensive guide for securing API endpoints in Navius applications, covering authentication, authorization, input validation, and API-specific security concerns" category: "Guides" tags: ["security", "API", "REST", "endpoints", "validation", "rate limiting", "OWASP"] last_updated: "April 7, 2025" version: "1.0"
API Security Guide
Overview
This guide provides detailed instructions for securing API endpoints in Navius applications. APIs are critical entry points to your application that require robust security measures to protect your data and services.
API Security Fundamentals
Common API Security Threats
- Broken Authentication: Weak authentication allowing unauthorized access
- Excessive Data Exposure: Returning excessive data in API responses
- Broken Object Level Authorization: Improper access controls for resources
- Mass Assignment: Client-provided data modifying sensitive properties
- Injection Attacks: SQL, NoSQL, command injection via API inputs
- Improper Assets Management: Exposed debug endpoints or outdated APIs
- API Abuse: Excessive requests that impact availability
API Security Principles
- Defense in Depth: Multiple security layers
- Least Privilege: Limit access to necessary resources
- Zero Trust: Verify every request regardless of source
- Secure by Default: Security controls enabled by default
- Fail Securely: Errors default to secure state
Secure API Authentication
API Key Authentication
Configure API key authentication:
```yaml
# config/default.yaml
api:
  auth:
    type: "apikey"
    header_name: "X-API-Key"
    key_validation: "database" # or "static", "redis"
  rate_limiting:
    enabled: true
    limit: 100
    window_seconds: 60
```
Implement API key validation:
```rust
use navius::api::auth::{ApiKeyValidator, DatabaseApiKeyValidator};

// Create API key validator
let api_key_validator = DatabaseApiKeyValidator::new(db_pool).await?;

// API key middleware
async fn api_key_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Extract API key from header
    let api_key = req.headers()
        .get(state.config.api.auth.header_name.as_str())
        .and_then(|h| h.to_str().ok())
        .ok_or(StatusCode::UNAUTHORIZED)?;

    // Validate API key
    let client_info = state.api_key_validator
        .validate(api_key)
        .await
        .map_err(|_| StatusCode::UNAUTHORIZED)?;

    // Add client info to request extensions
    let mut req = req;
    req.extensions_mut().insert(client_info);

    // Continue to handler
    Ok(next.run(req).await)
}

// Generate new API key
async fn generate_api_key(
    org_id: Uuid,
    permissions: Vec<String>,
    api_key_service: &ApiKeyService,
) -> Result<ApiKey, Error> {
    let api_key = api_key_service.generate(org_id, permissions).await?;
    Ok(api_key)
}

// Revoke API key
async fn revoke_api_key(
    key_id: Uuid,
    api_key_service: &ApiKeyService,
) -> Result<(), Error> {
    api_key_service.revoke(key_id).await?;
    Ok(())
}
```
Bearer Token Authentication
Implement JWT-based authentication:
```rust
use navius::api::auth::{JwtValidator, JwtConfig};

// Create JWT validator
let jwt_config = JwtConfig {
    issuer: "navius-api".to_string(),
    audience: "navius-client".to_string(),
    key_id: "current-signing-key".to_string(),
    public_key_path: "/path/to/public.pem".to_string(),
};

let jwt_validator = JwtValidator::new(jwt_config);

// JWT middleware
async fn jwt_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Extract token from Authorization header
    let token = req.headers()
        .get(HeaderName::from_static("authorization"))
        .and_then(|h| h.to_str().ok())
        .and_then(|h| h.strip_prefix("Bearer "))
        .ok_or(StatusCode::UNAUTHORIZED)?;

    // Validate JWT
    let claims = state.jwt_validator
        .validate(token)
        .map_err(|_| StatusCode::UNAUTHORIZED)?;

    // Add claims to request extensions
    let mut req = req;
    req.extensions_mut().insert(claims);

    // Continue to handler
    Ok(next.run(req).await)
}
```
OAuth 2.0 and OpenID Connect
Configure OAuth 2.0:
```yaml
# config/default.yaml
api:
  auth:
    type: "oauth2"
    provider: "entra"
    entra:
      tenant_id: "your-tenant-id"
      client_id: "your-client-id"
      jwks_uri: "https://login.microsoftonline.com/{tenant_id}/discovery/v2.0/keys"
      issuer: "https://login.microsoftonline.com/{tenant_id}/v2.0"
      audience: "api://your-app-id"
```
Implement OAuth 2.0 validation:
```rust
use navius::api::auth::{OAuth2Validator, OAuth2Config};

// Create OAuth2 validator
let oauth2_config = OAuth2Config::from_config(&config)?;
let oauth2_validator = OAuth2Validator::new(oauth2_config).await?;

// OAuth2 middleware
async fn oauth2_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Extract token from Authorization header
    let token = req.headers()
        .get(HeaderName::from_static("authorization"))
        .and_then(|h| h.to_str().ok())
        .and_then(|h| h.strip_prefix("Bearer "))
        .ok_or(StatusCode::UNAUTHORIZED)?;

    // Validate OAuth2 token
    let claims = state.oauth2_validator
        .validate(token)
        .await
        .map_err(|_| StatusCode::UNAUTHORIZED)?;

    // Add claims to request extensions
    let mut req = req;
    req.extensions_mut().insert(claims);

    // Continue to handler
    Ok(next.run(req).await)
}
```
API Authorization
Scopes and Permissions
Configure API scopes:
```yaml
# config/default.yaml
api:
  scopes:
    - name: "users:read"
      description: "Read user information"
    - name: "users:write"
      description: "Create or update users"
    - name: "admin"
      description: "Administrative access"
```
Implement scope-based authorization:
```rust
use navius::api::auth::{ScopeValidator, Claims};

// Scope validation middleware
async fn scope_middleware(
    req: Request,
    next: Next,
    required_scopes: Vec<String>,
) -> Result<Response, StatusCode> {
    // Get claims from request extensions
    let claims = req.extensions()
        .get::<Claims>()
        .ok_or(StatusCode::UNAUTHORIZED)?;

    // Check if token has required scopes
    let has_scope = required_scopes.iter().any(|scope| {
        claims.scopes.contains(scope)
    });

    if !has_scope {
        return Err(StatusCode::FORBIDDEN);
    }

    // Continue to handler
    Ok(next.run(req).await)
}

// Apply to routes
let app = Router::new()
    .route("/users", get(get_users_handler))
    .route_layer(middleware::from_fn(|req, next| {
        scope_middleware(req, next, vec!["users:read".to_string()])
    }))
    .route("/users", post(create_user_handler))
    .route_layer(middleware::from_fn(|req, next| {
        scope_middleware(req, next, vec!["users:write".to_string()])
    }));
```
Fine-grained API Permissions
Implement resource-based permissions:
```rust
use navius::api::auth::{PermissionValidator, ResourcePermission};

// Resource permission middleware
async fn resource_permission_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Get claims from request extensions
    let claims = req.extensions()
        .get::<Claims>()
        .ok_or(StatusCode::UNAUTHORIZED)?;

    // Extract resource ID from request
    let resource_id = extract_resource_id(&req)?;

    // Determine action from method
    let action = match req.method() {
        &Method::GET => "read",
        &Method::POST => "create",
        &Method::PUT | &Method::PATCH => "update",
        &Method::DELETE => "delete",
        _ => "access",
    };

    // Check if token has permission for this resource
    let permission = ResourcePermission {
        resource_type: "user".to_string(),
        resource_id: Some(resource_id),
        action: action.to_string(),
    };

    let has_permission = state.permission_validator
        .validate(claims, &permission)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    if !has_permission {
        return Err(StatusCode::FORBIDDEN);
    }

    // Continue to handler
    Ok(next.run(req).await)
}
```
Input Validation and Sanitization
Request Validation
Validate API requests:
```rust
use navius::api::validation::{Validator, ValidationRules};
use serde::{Deserialize, Serialize};

// Define validation schema
#[derive(Debug, Deserialize, Serialize)]
struct CreateUserRequest {
    #[validate(length(min = 3, max = 50))]
    username: String,
    #[validate(email)]
    email: String,
    #[validate(length(min = 8, max = 100), strong_password)]
    password: String,
    #[validate(phone)]
    phone: Option<String>,
}

// Create validator
let validator = Validator::new();

// Request validation middleware
async fn validate_request<T: DeserializeOwned + ValidatedRequest>(
    Json(payload): Json<T>,
    validator: &Validator,
) -> Result<Json<T>, StatusCode> {
    // Validate request
    validator.validate(&payload)
        .map_err(|_| StatusCode::BAD_REQUEST)?;

    Ok(Json(payload))
}

// Use in handler
async fn create_user_handler(
    State(state): State<AppState>,
    validated: ValidatedJson<CreateUserRequest>,
) -> impl IntoResponse {
    // Handler implementation with validated request
    let user = create_user(validated.0).await?;
    (StatusCode::CREATED, Json(user))
}
```
Content Type Validation
Ensure correct content types:
```rust
// Content type validation middleware
async fn content_type_middleware(
    req: Request,
    next: Next,
    allowed_types: Vec<&'static str>,
) -> Result<Response, StatusCode> {
    // Extract content type header
    let content_type = req.headers()
        .get(HeaderName::from_static("content-type"))
        .and_then(|h| h.to_str().ok())
        .unwrap_or("");

    // Check if content type is allowed
    let allowed = allowed_types.iter().any(|&t| content_type.starts_with(t));

    if !allowed {
        return Err(StatusCode::UNSUPPORTED_MEDIA_TYPE);
    }

    // Continue to handler
    Ok(next.run(req).await)
}

// Apply to routes
let app = Router::new()
    .route("/users", post(create_user_handler))
    .route_layer(middleware::from_fn(|req, next| {
        content_type_middleware(req, next, vec!["application/json"])
    }));
```
API Schema Validation
Validate against OpenAPI schema:
```rust
use navius::api::validation::{OpenApiValidator, OpenApiConfig};

// Create OpenAPI validator
let openapi_validator = OpenApiValidator::new(OpenApiConfig {
    schema_path: "/path/to/openapi.yaml".to_string(),
    validate_requests: true,
    validate_responses: true,
});

// OpenAPI validation middleware
async fn openapi_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Validate request against schema
    state.openapi_validator
        .validate_request(&req)
        .map_err(|_| StatusCode::BAD_REQUEST)?;

    // Call handler
    let response = next.run(req).await;

    // Validate response against schema
    state.openapi_validator
        .validate_response(&response)
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    Ok(response)
}
```
Rate Limiting and Throttling
Rate Limiting Configuration
Configure rate limiting:
```yaml
# config/default.yaml
api:
  rate_limiting:
    enabled: true
    strategies:
      - type: "ip"
        limit: 100
        window_seconds: 60
      - type: "user"
        limit: 1000
        window_seconds: 3600
      - type: "token"
        limit: 5000
        window_seconds: 3600
```
Rate Limiting Implementation
Implement rate limiting:
```rust
use navius::api::protection::{RateLimiter, RateLimitStrategy, RateLimitConfig};

// Create rate limiter
let rate_limiter = RateLimiter::new(
    RateLimitConfig::from_config(&config)?,
    redis_client,
);

// Rate limiting middleware
async fn rate_limit_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Get client identifier (IP, user ID, or token)
    let client_id = get_client_identifier(&req)?;

    // Get rate limit strategy based on client
    let strategy = state.rate_limiter.get_strategy_for_client(&client_id);

    // Check rate limit
    let result = state.rate_limiter
        .check(client_id, strategy)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    if !result.allowed {
        return Err(StatusCode::TOO_MANY_REQUESTS);
    }

    // Add rate limit headers to response
    let response = next.run(req).await;
    let response = add_rate_limit_headers(response, result);

    Ok(response)
}

// Add rate limit headers to response
fn add_rate_limit_headers(mut response: Response, result: RateLimitResult) -> Response {
    let headers = response.headers_mut();

    headers.insert(
        HeaderName::from_static("x-ratelimit-limit"),
        HeaderValue::from_str(&result.limit.to_string()).unwrap(),
    );
    headers.insert(
        HeaderName::from_static("x-ratelimit-remaining"),
        HeaderValue::from_str(&result.remaining.to_string()).unwrap(),
    );
    headers.insert(
        HeaderName::from_static("x-ratelimit-reset"),
        HeaderValue::from_str(&result.reset.to_string()).unwrap(),
    );

    response
}
```
Throttling for Specific Endpoints
Implement endpoint-specific throttling:
```rust
// Endpoint-specific rate limit middleware
async fn endpoint_rate_limit_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
    endpoint: &str,
    limit: u64,
    window_seconds: u64,
) -> Result<Response, StatusCode> {
    // Get client identifier
    let client_id = get_client_identifier(&req)?;

    // Create endpoint-specific key
    let key = format!("{}:{}", endpoint, client_id);

    // Check custom rate limit
    let result = state.rate_limiter
        .check_custom(key, limit, window_seconds)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    if !result.allowed {
        return Err(StatusCode::TOO_MANY_REQUESTS);
    }

    // Continue to handler
    let response = next.run(req).await;
    let response = add_rate_limit_headers(response, result);

    Ok(response)
}

// Apply to specific endpoint
let app = Router::new()
    .route("/password-reset", post(password_reset_handler))
    .route_layer(middleware::from_fn_with_state(app_state.clone(), |req, next, state| {
        endpoint_rate_limit_middleware(State(state), req, next, "password-reset", 5, 3600)
    }));
```
API Response Security
Data Minimization
Implement response filtering:
```rust
use navius::api::response::{ResponseFilter, FilterConfig};

// Create response filter
let filter_config = FilterConfig {
    default_fields: vec!["id", "name", "created_at"],
    sensitive_fields: vec!["email", "phone", "address"],
    field_policies: HashMap::from([
        ("users".to_string(), vec!["id", "username", "created_at"]),
        ("orders".to_string(), vec!["id", "status", "items", "total"]),
    ]),
};

let response_filter = ResponseFilter::new(filter_config);

// Filter responses
async fn filter_response<T: Serialize>(
    data: T,
    resource_type: &str,
    fields: Option<Vec<String>>,
    filter: &ResponseFilter,
) -> Result<Json<Value>, StatusCode> {
    let filtered = filter
        .filter_response(data, resource_type, fields)
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    Ok(Json(filtered))
}

// Use in handler
async fn get_user_handler(
    State(state): State<AppState>,
    Path(user_id): Path<Uuid>,
    Query(params): Query<HashMap<String, String>>,
) -> impl IntoResponse {
    // Get user from database
    let user = get_user(user_id).await?;

    // Parse fields parameter
    let fields = params.get("fields").map(|f| {
        f.split(',').map(|s| s.trim().to_string()).collect()
    });

    // Filter response
    let filtered = filter_response(user, "users", fields, &state.response_filter).await?;

    (StatusCode::OK, filtered)
}
```
Security Headers
Implement security headers:
```rust
use navius::api::security::ApiSecurityHeadersLayer;

// Add API security headers
let app = Router::new()
    .route("/", get(handler))
    .layer(ApiSecurityHeadersLayer::new());

// Headers set:
// - X-Content-Type-Options: nosniff
// - Cache-Control: no-store
// - Content-Security-Policy: default-src 'self'
// - X-Frame-Options: DENY
// - Strict-Transport-Security: max-age=31536000; includeSubDomains
```
Safe Error Responses
Implement safe error handling:
```rust
use navius::api::error::{ApiError, ApiErrorResponse};

// Map an API error to a safe HTTP response
async fn handle_api_error(error: ApiError) -> impl IntoResponse {
    let status = match error.kind {
        ApiErrorKind::NotFound => StatusCode::NOT_FOUND,
        ApiErrorKind::Validation => StatusCode::BAD_REQUEST,
        ApiErrorKind::Authentication => StatusCode::UNAUTHORIZED,
        ApiErrorKind::Authorization => StatusCode::FORBIDDEN,
        ApiErrorKind::RateLimit => StatusCode::TOO_MANY_REQUESTS,
        ApiErrorKind::Internal => StatusCode::INTERNAL_SERVER_ERROR,
    };

    // Log detailed error information for debugging before
    // the error's fields are moved into the response
    if error.kind == ApiErrorKind::Internal {
        error!(?error, "Internal API error");
    } else {
        debug!(?error, "API error response");
    }

    // Create safe error response (only public fields leave the server)
    let response = ApiErrorResponse {
        code: error.code,
        message: error.public_message,
        details: error.public_details,
        request_id: error.request_id,
    };

    (status, Json(response))
}

// Use in error handler
async fn api_error_handler(error: BoxError) -> impl IntoResponse {
    if let Some(api_error) = error.downcast_ref::<ApiError>() {
        return handle_api_error(api_error.clone()).await;
    }

    // Convert other errors to internal API errors
    let api_error = ApiError::internal(
        "unexpected_error",
        "An unexpected error occurred",
        format!("{}", error),
    );

    handle_api_error(api_error).await
}
```
Cross-Origin Resource Sharing (CORS)
CORS Configuration
Configure CORS:
```yaml
# config/default.yaml
api:
  cors:
    enabled: true
    allow_origins:
      - "https://app.example.com"
      - "https://admin.example.com"
    allow_methods:
      - "GET"
      - "POST"
      - "PUT"
      - "DELETE"
    allow_headers:
      - "Authorization"
      - "Content-Type"
    expose_headers:
      - "X-Request-ID"
    max_age_seconds: 3600
    allow_credentials: true
```
CORS Implementation
Implement CORS:
```rust
use navius::api::cors::{CorsLayer, CorsConfig};
use tower_http::cors::{CorsLayer as TowerCorsLayer, Any};

// Create CORS layer
let cors_config = CorsConfig::from_config(&config)?;

let cors_layer = if cors_config.enabled {
    let allowed_origins = cors_config.allow_origins
        .iter()
        .map(|origin| origin.parse().unwrap())
        .collect::<Vec<_>>();

    let allowed_methods = cors_config.allow_methods
        .iter()
        .map(|method| method.parse().unwrap())
        .collect::<Vec<_>>();

    let allowed_headers = cors_config.allow_headers
        .iter()
        .map(|header| header.parse().unwrap())
        .collect::<Vec<_>>();

    let exposed_headers = cors_config.expose_headers
        .iter()
        .map(|header| header.parse().unwrap())
        .collect::<Vec<_>>();

    Some(
        TowerCorsLayer::new()
            .allow_origin(allowed_origins)
            .allow_methods(allowed_methods)
            .allow_headers(allowed_headers)
            .expose_headers(exposed_headers)
            .max_age(Duration::from_secs(cors_config.max_age_seconds))
            .allow_credentials(cors_config.allow_credentials)
    )
} else {
    None
};

// Apply CORS layer if enabled
let app = Router::new()
    .route("/", get(handler));

let app = if let Some(cors) = cors_layer {
    app.layer(cors)
} else {
    app
};
```
API Monitoring and Logging
Request Logging
Implement API request logging:
```rust
use navius::api::logging::{ApiLogger, LogConfig};

// Create API logger
let log_config = LogConfig {
    request_headers: vec!["user-agent", "content-type", "accept"],
    response_headers: vec!["content-type", "cache-control"],
    log_body: false,
    log_query_params: true,
    mask_sensitive_headers: vec!["authorization", "x-api-key"],
};

let api_logger = ApiLogger::new(log_config);

// Logger middleware
async fn api_logger_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Generate request ID if not present
    let request_id = get_or_generate_request_id(&req);

    // Log request
    let start_time = Instant::now();
    state.api_logger.log_request(&req, request_id).await;

    // Process request
    let response = next.run(req).await;

    // Calculate duration
    let duration = start_time.elapsed();

    // Log response
    state.api_logger.log_response(&response, request_id, duration).await;

    Ok(response)
}
```
Error Rate Monitoring
Implement error rate monitoring:
```rust
use navius::api::monitoring::{ErrorMonitor, AlertConfig};

// Create error monitor
let error_monitor = ErrorMonitor::new(
    AlertConfig {
        error_threshold_percent: 5.0,
        window_seconds: 60,
        min_requests: 10,
    },
    metrics_client,
);

// Error monitoring middleware
async fn error_monitor_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Capture the path before the request is consumed by the handler
    let path = req.uri().path().to_string();

    // Process request
    let response = next.run(req).await;

    // Check if response is an error
    let is_error = response.status().is_client_error()
        || response.status().is_server_error();

    // Record request result
    state.error_monitor
        .record(&path, is_error)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    Ok(response)
}
```
API Metrics
Implement API metrics:
```rust
use navius::api::metrics::{ApiMetrics, MetricsConfig};

// Create API metrics
let api_metrics = ApiMetrics::new(
    MetricsConfig {
        enabled: true,
        endpoint: "/metrics".to_string(),
        namespace: "navius_api".to_string(),
    },
    prometheus_registry,
);

// Metrics middleware
async fn api_metrics_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Extract path for grouping similar routes
    let path = normalize_path(req.uri().path());
    let method = req.method().clone();

    // Start timer
    let start_time = Instant::now();

    // Process request
    let response = next.run(req).await;

    // Record metrics
    let status = response.status().as_u16();
    let duration = start_time.elapsed();

    state.api_metrics.record_request(
        &path,
        method.as_str(),
        status,
        duration.as_secs_f64(),
    );

    Ok(response)
}
```
Security Testing for APIs
API Security Testing Tools
```rust
use navius::api::testing::{SecurityScanner, ScanConfig};

// Create security scanner
let security_scanner = SecurityScanner::new(
    ScanConfig {
        target_url: "https://api.example.com".to_string(),
        api_schema_path: "/path/to/openapi.yaml".to_string(),
        auth_token: Some("test-token".to_string()),
        scan_types: vec!["injection", "authentication", "authorization"],
    },
);

// Run security scan
async fn run_security_scan(scanner: &SecurityScanner) -> Result<ScanReport, Error> {
    let report = scanner.scan().await?;

    // Output scan results
    for vulnerability in &report.vulnerabilities {
        println!(
            "Vulnerability: {} (Severity: {})",
            vulnerability.name, vulnerability.severity
        );
        println!("Endpoint: {}", vulnerability.endpoint);
        println!("Description: {}", vulnerability.description);
        println!("Remediation: {}", vulnerability.remediation);
        println!();
    }

    Ok(report)
}
```
API Fuzz Testing
Implement fuzz testing:
```rust
use navius::api::testing::{FuzzTester, FuzzConfig};

// Create fuzz tester
let fuzz_tester = FuzzTester::new(
    FuzzConfig {
        target_url: "https://api.example.com".to_string(),
        api_schema_path: "/path/to/openapi.yaml".to_string(),
        auth_token: Some("test-token".to_string()),
        iterations: 1000,
        payloads_path: "/path/to/fuzz-payloads.txt".to_string(),
    },
);

// Run fuzz tests
async fn run_fuzz_tests(tester: &FuzzTester) -> Result<FuzzReport, Error> {
    let report = tester.run().await?;

    // Output fuzz test results
    for issue in &report.issues {
        println!("Issue: {}", issue.description);
        println!("Endpoint: {}", issue.endpoint);
        println!("Request: {:?}", issue.request);
        println!("Response: {}", issue.response.status);
        println!();
    }

    Ok(report)
}
```
API Security Best Practices
API Security Checklist
- Authentication and Authorization
  - Implement secure authentication (API keys, JWT, OAuth)
  - Use proper authorization for all endpoints
  - Implement token validation and revocation
- Input Validation and Sanitization
  - Validate all input parameters
  - Sanitize data to prevent injection attacks
  - Validate content types and schemas
- Rate Limiting and Resource Protection
  - Implement rate limiting for all endpoints
  - Set appropriate timeouts for all operations
  - Limit payload sizes
- Response Security
  - Return minimal data in responses
  - Use appropriate security headers
  - Return safe error messages
- Transport Security
  - Enforce HTTPS for all API communications
  - Configure proper TLS settings
  - Implement CORS properly
- Logging and Monitoring
  - Log all API access and errors
  - Monitor for suspicious activity
  - Set up alerts for security incidents
- API Lifecycle Management
  - Version API endpoints
  - Deprecate and retire APIs safely
  - Document security requirements
Secure API Design Principles
- Design for Least Privilege
  - Each API endpoint should require minimal permissions
  - Scope access tokens to specific resources
- Avoid Exposing Implementation Details
  - Hide internal identifiers when possible
  - Avoid leaking stack traces or internal error messages
- Secure Parameter Handling
  - Always validate query parameters and request bodies
  - Use parameterized queries for database operations
- Always Verify on Server
  - Never trust client-side validation
  - Revalidate all data server-side regardless of client validation
Troubleshooting API Security Issues
Common API Security Issues
- Authentication Failures
  - Invalid or expired tokens
  - Missing credentials
  - Incorrect API key format
- Authorization Problems
  - Missing permissions
  - Incorrect scopes
  - Resource access denied
- Rate Limiting Issues
  - Too many requests
  - Inconsistent rate limit application
  - Rate limit bypass attempts
- Input Validation Failures
  - Malformed input data
  - Injection attack attempts
  - Schema validation errors
Debugging API Security
```rust
// Enable detailed logging for API security components
tracing_subscriber::fmt()
    .with_env_filter("navius::api::auth=debug,navius::api::validation=debug")
    .init();

// Create test tokens for debugging
async fn create_debug_token(
    claims: HashMap<String, Value>,
    jwt_service: &JwtService,
) -> Result<String, Error> {
    let token = jwt_service.create_token(claims).await?;
    Ok(token)
}
```
Related Resources
- Authentication Implementation Guide
- Authorization Guide
- Data Protection Guide
- Security Best Practices
- OWASP API Security Top 10
title: "Performance Guides" description: "Comprehensive guides for optimizing performance in Navius applications, including database optimization, caching strategies, and resource management" category: "Guides" tags: ["performance", "optimization", "database", "caching", "migrations", "tuning"] last_updated: "April 5, 2025" version: "1.0"
Performance Guides
This section contains comprehensive guides for optimizing the performance of Navius applications. These guides cover various aspects of performance tuning, from database optimization to caching strategies and resource management.
Available Guides
Core Performance Guides
- Performance Tuning Guide - Comprehensive strategies for optimizing Navius applications
- Database Optimization Guide - Optimizing PostgreSQL database performance
- Migrations Guide - Managing database schema changes efficiently
Load Testing and Benchmarking
- Load Testing Guide - Strategies for testing applications under load
- Benchmarking Guide - Setting up and interpreting benchmarks
Performance Best Practices
When optimizing Navius applications, follow these best practices:
- Measure First - Establish baseline metrics before optimization
- Target Bottlenecks - Focus on the most significant performance constraints
- Incremental Improvements - Make small, measurable changes
- Test Thoroughly - Verify optimizations with realistic workloads
- Monitor Continuously - Track performance metrics over time
Performance Optimization Workflow
For effective performance optimization, follow this workflow:
- Profile and Identify - Use profiling tools to identify bottlenecks
- Analyze - Determine the root cause of performance issues
- Optimize - Implement targeted improvements
- Verify - Measure and confirm performance gains
- Document - Record optimization strategies for future reference
Key Performance Areas
Database Performance
Database operations often represent the most significant performance bottleneck in applications. Optimize:
- Query execution time
- Connection management
- Index usage
- Transaction handling
Learn more in the Database Optimization Guide.
Memory Management
Efficient memory usage is crucial for application performance:
- Minimize allocations in hot paths
- Use appropriate data structures
- Implement caching strategically
- Monitor memory growth
Concurrency
Optimize async code execution:
- Configure appropriate thread pools
- Avoid blocking operations in async code
- Implement backpressure mechanisms
- Balance parallelism with overhead
Network I/O
Minimize network overhead:
- Batch API requests
- Implement connection pooling
- Use appropriate timeouts
- Consider compression for large payloads
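Batching can be as simple as grouping pending work into fixed-size chunks before dispatch, so N items cost roughly N divided by the batch size in round trips instead of N. A sketch (the item type and batch size are illustrative):

```rust
/// Group pending request IDs into batches of at most `batch_size`,
/// reducing the number of network round trips.
fn batch_requests(ids: &[u64], batch_size: usize) -> Vec<Vec<u64>> {
    ids.chunks(batch_size.max(1)) // guard against batch_size == 0
        .map(|chunk| chunk.to_vec())
        .collect()
}
```

For example, ten IDs with a batch size of four produce three batches of sizes 4, 4, and 2, so three requests replace ten.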
Getting Started with Performance Optimization
If you're new to performance optimization, we recommend following this learning path:
- Start with the Performance Tuning Guide for a comprehensive overview
- Dive into Database Optimization for database-specific strategies
- Learn about efficient schema changes in the Migrations Guide
- Implement appropriate caching with the Caching Strategies Guide
Related Resources
- Caching Strategies Guide - Advanced caching techniques
- PostgreSQL Integration Guide - PostgreSQL integration strategies
- Deployment Guide - Deploying optimized applications
- Configuration Guide - Configuring for optimal performance
- Two-Tier Cache Example - Implementation example for advanced caching
title: "Performance Tuning Guide" description: "Comprehensive guide for optimizing the performance of Navius applications, including database, memory, caching, and network optimizations" category: "Guides" tags: ["performance", "optimization", "database", "caching", "memory", "profiling", "benchmarking"] last_updated: "April 5, 2025" version: "1.0"
Performance Tuning Guide
Overview
This guide provides comprehensive strategies for optimizing the performance of Navius applications. Performance tuning is essential for ensuring that your application responds quickly, uses resources efficiently, and can handle increasing loads as your user base grows.
Key Performance Areas
When optimizing a Navius application, focus on these key areas:
- Database Performance - Optimizing queries and database access patterns
- Caching Strategies - Implementing effective caching to reduce database load
- Memory Management - Minimizing memory usage and preventing leaks
- Concurrency - Optimizing async code execution and thread management
- Network I/O - Reducing latency in network operations
- Resource Utilization - Balancing CPU, memory, and I/O operations
Performance Measurement
Benchmarking Tools
Before optimizing, establish baseline performance metrics using these tools:
- Criterion - Rust benchmarking library
- wrk - HTTP benchmarking tool
- Prometheus - Metrics collection and monitoring
- Grafana - Visualization of performance metrics
Key Metrics to Track
- Request latency (p50, p95, p99 percentiles)
- Throughput (requests per second)
- Error rates
- Database query times
- Memory usage
- CPU utilization
- Garbage collection pauses
- Cache hit/miss ratios
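The latency percentiles above (p50, p95, p99) can be computed from raw samples with a nearest-rank calculation. This is a minimal sketch for ad-hoc analysis; production systems usually rely on histogram-based metrics from a tool like Prometheus instead:

```rust
/// Nearest-rank percentile: sort the samples and take the value at
/// rank ceil(p / 100 * N). `p` must be in (0, 100]; `samples` must
/// be non-empty.
fn percentile(samples: &mut [f64], p: f64) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    samples[rank.saturating_sub(1).min(samples.len() - 1)]
}
```

With 100 latency samples of 1.0 through 100.0 ms, this returns 50.0 for p50, 95.0 for p95, and 99.0 for p99.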
Database Optimization
Query Optimization
- Use the PostgreSQL query planner with `EXPLAIN ANALYZE`
- Optimize indexes for common query patterns
- Review and refine complex joins
- Consider materialized views for complex aggregations
```rust
// Example: forcing an index with a planner hint.
// Note: stock PostgreSQL ignores hint comments; this syntax requires the
// pg_hint_plan extension to have any effect.
let users = sqlx::query_as::<_, User>(
    "SELECT /*+ IndexScan(users idx_email) */ * FROM users WHERE email LIKE $1",
)
.bind(format!("{}%", email_prefix))
.fetch_all(&pool)
.await?;
```
Connection Pooling
- Configure appropriate connection pool sizes
- Monitor connection usage patterns
- Implement backpressure mechanisms
```rust
// Connection pooling configuration
let pool = PgPoolOptions::new()
    .max_connections((num_cpus::get() * 2) as u32) // rule of thumb: 2x CPU cores
    .min_connections(5)
    .max_lifetime(std::time::Duration::from_secs(30 * 60)) // 30 minutes
    .idle_timeout(std::time::Duration::from_secs(10 * 60)) // 10 minutes
    .connect(&database_url)
    .await?;
```
Database Access Patterns
- Implement the repository pattern for efficient data access
- Use batch operations where appropriate
- Consider read/write splitting for high-load applications
Caching Strategies
Multi-Level Caching
Implement the Navius two-tier caching pattern:
- L1 Cache - In-memory cache for frequently accessed data
- L2 Cache - Redis for distributed caching and persistence
```yaml
# Two-tier cache configuration
cache:
  enabled: true
  providers:
    - name: "memory"
      type: "memory"
      max_items: 10000
      ttl_seconds: 60
    - name: "redis"
      type: "redis"
      connection_string: "redis://localhost:6379"
      ttl_seconds: 300
  default_provider: "memory"
  fallback_provider: "redis"
```
Optimizing Cache Usage
- Cache expensive database operations
- Use appropriate TTL values based on data volatility
- Implement cache invalidation strategies
- Consider cache warming for critical data
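The ideas above (TTL-bounded entries, expiry checked on read) can be sketched with a toy single-tier cache. This is illustrative plain Rust, not the Navius cache provider API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Toy in-memory cache with a per-entry TTL, standing in for the L1 tier.
struct TtlCache<V> {
    entries: HashMap<String, (V, Instant)>,
    ttl: Duration,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { entries: HashMap::new(), ttl }
    }

    fn insert(&mut self, key: &str, value: V) {
        self.entries.insert(key.to_string(), (value, Instant::now()));
    }

    /// Returns the value only while its TTL has not elapsed.
    fn get(&self, key: &str) -> Option<V> {
        self.entries
            .get(key)
            .and_then(|(v, at)| (at.elapsed() < self.ttl).then(|| v.clone()))
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(60));
    cache.insert("user:42", "Alice".to_string());
    assert_eq!(cache.get("user:42"), Some("Alice".to_string()));
    assert_eq!(cache.get("user:missing"), None);
    println!("cache sketch ok");
}
```

A real deployment would add eviction (the `max_items` bound above) and the Redis fallback tier; this sketch only shows the TTL mechanics.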
Memory Optimization
Rust Memory Management
- Use appropriate data structures to minimize allocations
- Leverage Rust's ownership model to prevent memory leaks
- Consider using `Arc` to share large objects instead of cloning them
- Profile memory usage with tools like heaptrack or valgrind
Leak Prevention
- Implement proper resource cleanup in drop implementations
- Use structured concurrency patterns
- Monitor memory growth over time
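The first point can be illustrated with an RAII guard: when cleanup lives in `Drop`, the resource is released on every exit path, including early returns and panics. The connection counter here is a stand-in for a real handle, not a Navius type:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Global counter standing in for "currently open connections".
static OPEN_HANDLES: AtomicUsize = AtomicUsize::new(0);

/// RAII guard: acquiring registers the resource, dropping releases it.
struct ConnectionGuard;

impl ConnectionGuard {
    fn acquire() -> Self {
        OPEN_HANDLES.fetch_add(1, Ordering::SeqCst);
        ConnectionGuard
    }
}

impl Drop for ConnectionGuard {
    fn drop(&mut self) {
        // Runs on every exit path, including early returns and unwinding.
        OPEN_HANDLES.fetch_sub(1, Ordering::SeqCst);
    }
}

fn open_handles() -> usize {
    OPEN_HANDLES.load(Ordering::SeqCst)
}

fn main() {
    {
        let _conn = ConnectionGuard::acquire();
        assert_eq!(open_handles(), 1);
    } // guard dropped here
    assert_eq!(open_handles(), 0);
    println!("no leaked handles");
}
```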
Concurrency Optimization
Async Runtime Configuration
- Configure Tokio runtime with appropriate thread count
- Use work-stealing runtime for balanced load distribution
```rust
// Configure the Tokio runtime
let runtime = tokio::runtime::Builder::new_multi_thread()
    .worker_threads(num_cpus::get())
    .enable_all()
    .build()
    .unwrap();
```
Task Management
- Break large tasks into smaller, manageable chunks
- Implement backpressure for task submission
- Use appropriate buffer sizes for channels
- Consider using `spawn_blocking` for CPU-intensive tasks
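Backpressure through a bounded channel can be demonstrated with the standard library alone (a Tokio `mpsc::channel(n)` behaves analogously in async code): once the buffer is full, the producer blocks instead of queueing work without bound.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

/// Producer/consumer pipeline over a bounded channel: `send` blocks once
/// `buffer` tasks are queued, so the producer is throttled automatically.
fn run_pipeline(tasks: u64, buffer: usize) -> u64 {
    let (tx, rx) = sync_channel::<u64>(buffer);
    let producer = thread::spawn(move || {
        for task in 0..tasks {
            tx.send(task).unwrap(); // blocks when the buffer is full
        }
    });
    let total: u64 = rx.iter().sum(); // drains until the sender is dropped
    producer.join().unwrap();
    total
}

fn main() {
    assert_eq!(run_pipeline(16, 4), 120); // 0 + 1 + ... + 15
    println!("backpressure demo ok");
}
```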
Network I/O Optimization
HTTP Client Configuration
- Configure appropriate timeouts
- Use connection pooling
- Implement retry strategies with exponential backoff
- Consider enabling HTTP/2 for multiplexing
```rust
// Efficient HTTP client configuration
let client = reqwest::Client::builder()
    .timeout(std::time::Duration::from_secs(30))
    .pool_max_idle_per_host(10)
    .connect_timeout(std::time::Duration::from_secs(5))
    .build()?;
```
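The retry-with-exponential-backoff bullet reduces to one small helper that computes the delay before attempt `n`; `backoff_delay` is an illustrative function, not a Navius or reqwest API (production retry loops usually also add random jitter to avoid thundering herds):

```rust
use std::time::Duration;

/// Delay before retry attempt `attempt` (0-based): base * 2^attempt, capped.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    // Cap the shift so large attempt numbers cannot overflow the multiplier.
    let delay = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(delay.min(max_ms))
}

fn main() {
    for attempt in 0..6 {
        println!("attempt {attempt}: wait {:?}", backoff_delay(attempt, 100, 10_000));
    }
}
```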
Server Configuration
- Tune server parameters based on hardware resources
- Configure appropriate worker threads
- Implement connection timeouts
- Consider using compression middleware
Resource Utilization
CPU Optimization
- Profile CPU hotspots with tools like flamegraph
- Optimize critical paths identified in profiling
- Use parallel processing for CPU-intensive operations
- Consider using SIMD instructions for data processing
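Parallel processing of CPU-bound work can be sketched with scoped threads from the standard library; production code would more likely reach for a data-parallelism crate such as rayon:

```rust
use std::thread;

/// Sum a slice in parallel by splitting it across scoped worker threads.
fn parallel_sum(data: &[u64], workers: usize) -> u64 {
    let workers = workers.max(1);
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    assert_eq!(parallel_sum(&data, 4), 500_500);
    println!("parallel sum = {}", parallel_sum(&data, 4));
}
```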
I/O Optimization
- Batch database operations where possible
- Use buffered I/O for file operations
- Minimize disk I/O with appropriate caching
- Consider using async I/O for file operations
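A sketch of the buffered-I/O point: `BufWriter` coalesces many small writes into a few large ones. A `Vec<u8>` stands in for a `File` here so the example stays self-contained:

```rust
use std::io::{BufWriter, Write};

/// Write `n` small records through a buffer and report the bytes written.
fn write_records(n: u32) -> std::io::Result<usize> {
    // In real code this would wrap File::create(...); a Vec keeps the sketch pure.
    let mut writer = BufWriter::new(Vec::new());
    for i in 0..n {
        writeln!(writer, "record {i}")?;
    }
    // into_inner flushes the buffer and returns the underlying sink.
    let sink = writer.into_inner().map_err(|e| e.into_error())?;
    Ok(sink.len())
}

fn main() -> std::io::Result<()> {
    println!("wrote {} bytes", write_records(1_000)?);
    Ok(())
}
```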
Case Study: Optimizing a Navius API Service
Here's a real-world example of performance optimization for a Navius API service:
Initial Performance
- 100 requests/second
- 250ms average latency
- 95th percentile latency: 500ms
- Database CPU: 70%
Optimization Steps
- Added proper indexes for common queries
- Implemented two-tier caching
- Optimized connection pool settings
- Added query timeouts
- Implemented data pagination
Results
- 500 requests/second (5x improvement)
- 50ms average latency (5x improvement)
- 95th percentile latency: 100ms (5x improvement)
- Database CPU: 40% (despite higher throughput)
Performance Tuning Workflow
Follow this systematic approach to performance tuning:
- Measure - Establish baseline performance metrics
- Profile - Identify bottlenecks
- Optimize - Implement targeted optimizations
- Validate - Measure performance improvements
- Iterate - Continue the cycle
Common Pitfalls
- Premature optimization
- Optimizing without measuring
- Over-caching (which can lead to stale data)
- Neglecting resource cleanup
- Not considering the cost of serialization/deserialization
Related Resources
- Caching Strategies Guide
- Database Optimization Guide
- Two-Tier Cache Example
- PostgreSQL Integration Guide
- Error Handling Guide
title: "Database Optimization Guide" description: "Comprehensive guide for optimizing PostgreSQL database performance in Navius applications, including indexing, query optimization, and schema design" category: "Guides" tags: ["database", "postgresql", "optimization", "performance", "indexing", "queries", "schema"] last_updated: "April 5, 2025" version: "1.0"
Database Optimization Guide
Overview
This guide provides comprehensive strategies for optimizing PostgreSQL database performance in Navius applications. Database performance is critical for application responsiveness and scalability, as database operations often represent the most significant bottleneck in web applications.
Database Design Principles
Schema Design
- Normalize with purpose - Follow normalization principles but prioritize query performance
- Choose appropriate data types - Use the most efficient data types for each column
- Limit column width - Use varchar with appropriate length limits instead of unlimited text fields
- Consider table partitioning - For very large tables (millions of rows)
Index Design
- Primary keys - Always define explicit primary keys
- Foreign keys - Index all foreign key columns
- Compound indexes - Create for commonly queried column combinations
- Cover indexes - Include additional columns to create covering indexes
- Partial indexes - Use for filtered queries on large tables
```sql
-- Example: Compound index for a commonly used query pattern
CREATE INDEX idx_users_email_status ON users (email, status);

-- Example: Covering index to avoid table lookups
CREATE INDEX idx_posts_author_created_title ON posts (author_id, created_at) INCLUDE (title);

-- Example: Partial index for active users
CREATE INDEX idx_active_users ON users (email, last_login) WHERE status = 'active';
```
Query Optimization
Query Analysis
Use `EXPLAIN ANALYZE` to understand query execution plans:
```sql
EXPLAIN ANALYZE SELECT * FROM users
WHERE email LIKE 'user%' AND status = 'active'
ORDER BY created_at DESC LIMIT 10;
```
Look for these issues in query plans:
- Sequential scans on large tables
- High cost operations
- Unused indexes
- Poor join performance
Common Optimizations
1. Avoid SELECT *
```sql
-- Instead of this
SELECT * FROM users WHERE id = 1;

-- Do this
SELECT id, email, name, created_at FROM users WHERE id = 1;
```
2. Use Parameterized Queries
```rust
// Instead of string interpolation, bind parameters
let users = sqlx::query_as::<_, User>(
    "SELECT id, name FROM users WHERE email = $1 AND status = $2",
)
.bind(email)
.bind(status)
.fetch_all(&pool)
.await?;
```
3. Batch Operations
```rust
// Instead of multiple single inserts, wrap the batch in one transaction
let mut transaction = pool.begin().await?;
for user in users {
    sqlx::query("INSERT INTO user_logs (user_id, action, timestamp) VALUES ($1, $2, $3)")
        .bind(user.id)
        .bind("login")
        .bind(Utc::now())
        .execute(&mut transaction)
        .await?;
}
transaction.commit().await?;
```
4. Use Appropriate WHERE Clauses
```sql
-- Avoid functions on indexed columns (this forces a sequential scan)
SELECT * FROM users WHERE LOWER(email) = '[email protected]';

-- Do this instead (or create an expression index on LOWER(email))
SELECT * FROM users WHERE email = '[email protected]';
```
Connection Management
Connection Pooling
Configure your connection pool appropriately:
```rust
// Connection pool configuration for Navius applications
let pool = PgPoolOptions::new()
    .max_connections((num_cpus::get() * 4) as u32) // 4x CPU cores as a starting point
    .min_connections(num_cpus::get() as u32)       // at least one connection per CPU
    .max_lifetime(std::time::Duration::from_secs(30 * 60)) // 30 minutes
    .idle_timeout(std::time::Duration::from_secs(5 * 60))  // 5 minutes
    .connect(&database_url)
    .await?;
```
Guidelines for sizing:
- Measure maximum concurrent database operations during peak load
- Consider PostgreSQL's `max_connections` setting (usually 100-300)
- Monitor connection usage over time
Transaction Management
- Keep transactions as short as possible
- Don't perform I/O or network operations within transactions
- Use appropriate isolation levels
- Consider using read-only transactions for queries
```rust
// Example: read-only transaction.
// Note: sqlx has no begin_read_only(); start a transaction and mark it read-only.
let mut tx = pool.begin().await?;
sqlx::query("SET TRANSACTION READ ONLY").execute(&mut tx).await?;
let users = sqlx::query_as::<_, User>("SELECT id, name FROM users WHERE status = $1")
    .bind("active")
    .fetch_all(&mut tx)
    .await?;
tx.commit().await?;
```
Advanced Optimization Techniques
PostgreSQL Configuration
Key PostgreSQL settings to tune:
```
# Memory settings
shared_buffers = 25% of system RAM (up to 8GB)
work_mem = 32-64MB
maintenance_work_mem = 256MB

# Checkpoint settings
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9

# Planner settings
random_page_cost = 1.1 (for SSD storage)
effective_cache_size = 75% of system RAM
```
Materialized Views
For expensive reports or analytics queries:
```sql
CREATE MATERIALIZED VIEW user_stats AS
SELECT
    date_trunc('day', created_at) AS day,
    count(*) AS new_users,
    -- epoch gives seconds; divide by 86400 to report days
    avg(extract(epoch FROM now() - last_login)) / 86400 AS avg_days_since_login
FROM users
GROUP BY date_trunc('day', created_at);

-- Refresh the view:
REFRESH MATERIALIZED VIEW user_stats;
```
Database Monitoring
Monitor these metrics:
- Query execution times
- Index usage statistics
- Cache hit ratios
- Lock contention
- Deadlocks
Tools to use:
- `pg_stat_statements` - Query execution statistics
- `pg_stat_user_indexes` - Index usage statistics
- pgBadger - Log analysis and reporting
Implementing Repository Pattern in Navius
The Repository pattern helps maintain clean database access and makes queries easier to optimize:
```rust
// User repository implementation
pub struct UserRepository {
    pool: PgPool,
}

impl UserRepository {
    pub fn new(pool: PgPool) -> Self {
        Self { pool }
    }

    pub async fn find_by_email(&self, email: &str) -> Result<Option<User>, sqlx::Error> {
        sqlx::query_as::<_, User>(
            "SELECT id, name, email, status FROM users WHERE email = $1 LIMIT 1",
        )
        .bind(email)
        .fetch_optional(&self.pool)
        .await
    }

    pub async fn find_active_users(&self, limit: i64, offset: i64) -> Result<Vec<User>, sqlx::Error> {
        sqlx::query_as::<_, User>(
            "SELECT id, name, email, status FROM users \
             WHERE status = 'active' ORDER BY last_login DESC LIMIT $1 OFFSET $2",
        )
        .bind(limit)
        .bind(offset)
        .fetch_all(&self.pool)
        .await
    }

    // Additional methods...
}
```
Performance Testing Database Queries
Benchmarking Strategies
- Isolated Query Testing - Test queries independently from application
- Mock Production Data - Use production-sized datasets
- Concurrent Load Testing - Test under simultaneous connections
- EXPLAIN ANALYZE - Measure execution plan costs
- Cache Warmup/Cooldown - Test with both hot and cold cache scenarios
Testing with Criterion
```rust
fn benchmark_user_query(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    let pool = rt.block_on(establish_connection());
    let repo = UserRepository::new(pool.clone());

    c.bench_function("find_active_users", |b| {
        b.iter(|| rt.block_on(repo.find_active_users(100, 0)))
    });
}
```
Common Database Performance Issues
N+1 Query Problem
```rust
// BAD: N+1 query problem
let users = repo.find_active_users(100, 0).await?;
for user in &users {
    let posts = repo.find_posts_by_user_id(user.id).await?;
    // Process posts...
}

// GOOD: single query with a join
let users_with_posts = repo.find_active_users_with_posts(100, 0).await?;
```
Missing Indexes
Signs of missing indexes:
- Sequential scans on large tables
- Slow filtering operations
- Slow ORDER BY or GROUP BY clauses
Oversized Queries
- Fetching unnecessary columns
- Not using LIMIT with large result sets
- Not using pagination
- Using subqueries when joins would be more efficient
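A cheap guard against oversized result sets is to clamp client-supplied pagination before it reaches SQL. `page_bounds` is an illustrative helper, not part of Navius:

```rust
/// Clamp client-supplied pagination so one request cannot fetch an
/// unbounded result set; returns (LIMIT, OFFSET) values for the query.
fn page_bounds(page: u32, per_page: u32) -> (i64, i64) {
    const MAX_PER_PAGE: u32 = 100;
    let limit = per_page.clamp(1, MAX_PER_PAGE) as i64;
    let offset = i64::from(page) * limit;
    (limit, offset)
}

fn main() {
    assert_eq!(page_bounds(0, 25), (25, 0));
    assert_eq!(page_bounds(2, 1_000), (100, 200)); // per_page capped at 100
    println!("pagination bounds ok");
}
```

The returned pair feeds directly into the `LIMIT $1 OFFSET $2` bindings shown in the repository example above.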
Database Migration Strategies
When migrating or updating schemas:
- Create indexes concurrently

```sql
CREATE INDEX CONCURRENTLY idx_users_status ON users (status);
```

- Perform updates in batches

```rust
// Update in batches of 1000
for batch in user_ids.chunks(1000) {
    sqlx::query("UPDATE users SET status = $1 WHERE id = ANY($2)")
        .bind("inactive")
        .bind(batch)
        .execute(&pool)
        .await?;
}
```

- Use temporary tables for complex migrations

```sql
CREATE TEMPORARY TABLE temp_users AS SELECT * FROM users WHERE false;
INSERT INTO temp_users SELECT * FROM users WHERE <condition>;
-- Perform operations on temp_users
-- Finally update or insert back to users
```
Case Study: Optimizing a High-Traffic User Service
Initial State
- Average query time: 150ms
- Database CPU: 85% utilization
- Cache hit ratio: 45%
- Frequent timeouts during peak traffic
Optimization Steps
- Added compound indexes on commonly queried fields
- Implemented result caching for frequent queries
- Optimized schema removing unused columns
- Implemented connection pooling with optimal settings
- Added database replicas for read operations
Results
- Average query time: 15ms (10x improvement)
- Database CPU: 40% utilization
- Cache hit ratio: 80%
- Zero timeouts during peak traffic
Related Resources
- PostgreSQL Integration Guide
- Performance Tuning Guide
- Caching Strategies Guide
- Database Migration Guide
- Two-Tier Cache Example
title: Deployment Guides description: "Comprehensive guides for deploying Navius applications to production environments, including AWS, Docker, and Kubernetes deployments" category: guides tags:
- deployment
- aws
- docker
- kubernetes
- production
- security
- monitoring
- infrastructure related:
- ../README.md
- ../../reference/architecture/principles.md
- ../features/authentication.md last_updated: March 27, 2025 version: 1.0
Deployment Guides
This section provides comprehensive guidance for deploying Navius applications to production environments. Our deployment guides cover various deployment strategies, from simple setups to complex cloud-native architectures.
Getting Started
For most applications, we recommend following this deployment progression:
- Production Deployment Basics - Essential production deployment concepts
- Docker Deployment - Containerizing your application
- AWS Deployment - Deploying to AWS cloud
- Kubernetes Deployment - Advanced container orchestration
Available Guides
Core Deployment
- Production Deployment - Comprehensive production deployment guide
- Security Checklist - Essential security measures for production
- Environment Configuration - Managing environment variables and configs
Container Deployment
- Docker Deployment - Containerizing Navius applications
- Kubernetes Deployment - Orchestrating containers with Kubernetes
- Container Best Practices - Docker and Kubernetes best practices
Cloud Deployment
- AWS Deployment - Deploying to Amazon Web Services
- AWS RDS Setup - Setting up PostgreSQL on AWS RDS
- AWS ElastiCache - Configuring Redis on AWS ElastiCache
Monitoring and Operations
- Monitoring Setup - Setting up application monitoring
- Logging Best Practices - Implementing effective logging
- Performance Optimization - Tuning application performance
Deployment Checklist
Before deploying to production, ensure:
-
Security
- Authentication is properly configured
- SSL/TLS certificates are set up
- Secrets management is implemented
- Security headers are configured
-
Infrastructure
- Database backups are configured
- Redis persistence is set up
- Load balancing is implemented
- Auto-scaling is configured
-
Monitoring
- Application metrics are tracked
- Error tracking is implemented
- Performance monitoring is set up
- Alerts are configured
-
Operations
- Deployment pipeline is tested
- Rollback procedures are documented
- Backup restoration is tested
- Documentation is updated
Related Resources
- Architecture Principles - Core architectural concepts
- Configuration Guide - Environment setup
- Authentication Guide - Security implementation
- PostgreSQL Integration - Database setup
Need Help?
If you encounter deployment issues:
- Check the troubleshooting section in each deployment guide
- Review our Deployment FAQs
- Join our Discord Community for real-time help
- Open an issue on our GitHub repository
title: "Navius Deployment Guide" description: "A comprehensive guide to deploying Navius applications in production environments, covering AWS deployment, Docker containerization, security considerations, and monitoring setup" category: guides tags:
- deployment
- aws
- docker
- kubernetes
- monitoring
- security
- ci-cd
- infrastructure related:
- aws-deployment.md
- docker-deployment.md
- kubernetes-deployment.md
- ../../05_reference/configuration/environment-variables.md
- ../features/authentication.md last_updated: April 1, 2025 version: 1.0
Navius Deployment Guide
This guide provides comprehensive instructions for deploying Navius applications to various environments, focusing on production-grade deployments with security, scalability, and reliability.
Deployment Options
Navius applications can be deployed in multiple ways, depending on your infrastructure preferences:
Docker Containers
Navius excels in containerized environments, offering minimal resource usage and fast startup times.
Docker Deployment Example
# Build the Docker image
docker build -t navius:latest .
# Run the container
docker run -d \
-p 8080:8080 \
-e CONFIG_PATH=/etc/navius/config \
-e RUST_LOG=info \
-v /host/path/to/config:/etc/navius/config \
--name navius-api \
navius:latest
Kubernetes
Navius applications are ideal for Kubernetes due to their small footprint and rapid startup time.
Kubernetes Deployment Example
apiVersion: apps/v1
kind: Deployment
metadata:
name: navius-api
labels:
app: navius-api
spec:
replicas: 3
selector:
matchLabels:
app: navius-api
template:
metadata:
labels:
app: navius-api
spec:
containers:
- name: api
image: your-registry/navius:latest
ports:
- containerPort: 8080
env:
- name: RUST_LOG
value: "info"
- name: CONFIG_PATH
value: "/etc/navius/config"
readinessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
resources:
limits:
cpu: "0.5"
memory: "512Mi"
requests:
cpu: "0.1"
memory: "128Mi"
Service Definition
apiVersion: v1
kind: Service
metadata:
name: navius-api
spec:
selector:
app: navius-api
ports:
- port: 80
targetPort: 8080
type: ClusterIP
AWS Deployment
Navius is optimized for deploying on AWS infrastructure, with built-in integrations for many AWS services.
AWS Deployment with CloudFormation
# AWS CloudFormation template example (simplified)
Resources:
NaviusApiInstance:
Type: AWS::EC2::Instance
Properties:
InstanceType: t3.micro
ImageId: ami-0abcdef1234567890
UserData:
Fn::Base64: !Sub |
#!/bin/bash
amazon-linux-extras install docker
systemctl start docker
docker run -d -p 80:8080 your-registry/navius:latest
Serverless Deployment (AWS Lambda)
Navius supports serverless deployment via AWS Lambda, offering extremely fast cold-start times compared to JVM-based alternatives:
# Deploy using Serverless Framework
serverless deploy
Bare Metal Deployment
For maximum performance, Navius can be deployed directly to bare metal servers:
Manual Deployment Process
# SSH to your server
ssh user@your-server
# Create directories
sudo mkdir -p /opt/navius/config
# Copy binary and configuration
scp target/release/navius user@server:/opt/navius/
Systemd Service Configuration
# /etc/systemd/system/navius.service
[Unit]
Description=Navius API Server
After=network.target
[Service]
User=navius
WorkingDirectory=/opt/navius
ExecStart=/opt/navius/navius
Restart=on-failure
Environment=RUST_LOG=info
[Install]
WantedBy=multi-user.target
Production Configuration
Environment Variables
For production deployments, configure these essential environment variables:
# Core settings
RUN_ENV=production
RUST_LOG=info
PORT=3000
# Database settings
DATABASE_URL=postgres://user:password@host:port/db
DATABASE_MAX_CONNECTIONS=20
DATABASE_CONNECT_TIMEOUT_SECONDS=5
# Cache settings
REDIS_URL=redis://user:password@host:port
CACHE_TTL_SECONDS=3600
# Security settings
JWT_SECRET=your-secure-jwt-secret
CORS_ALLOWED_ORIGINS=https://yourdomain.com
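Inside the application, such variables are typically read with a typed fallback. This is a generic sketch; the `env_or` helper is illustrative, not a Navius API:

```rust
use std::env;

/// Read a typed setting from the environment, falling back to a default
/// when the variable is unset or fails to parse.
fn env_or<T: std::str::FromStr>(key: &str, default: T) -> T {
    env::var(key)
        .ok()
        .and_then(|value| value.parse().ok())
        .unwrap_or(default)
}

fn main() {
    let port: u16 = env_or("PORT", 3000);
    let max_connections: u32 = env_or("DATABASE_MAX_CONNECTIONS", 20);
    println!("port={port} max_connections={max_connections}");
}
```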
Recommended Infrastructure
For production deployments, we recommend:
- Database: AWS RDS PostgreSQL or Aurora
- Cache: AWS ElastiCache Redis
- Storage: AWS S3
- CDN: AWS CloudFront
- Load Balancer: AWS Application Load Balancer with TLS termination
Performance Tuning
Navius is designed for high performance, but these optimizations can help in production:
Thread Pool Sizing
Configure thread pools according to your CPU resources:
TOKIO_WORKER_THREADS=number_of_cores * 2
Connection Pool Tuning
Optimize database connection pools:
DATABASE_MAX_CONNECTIONS=25
DATABASE_MIN_IDLE=5
DATABASE_IDLE_TIMEOUT_SECONDS=300
Memory Limits
Navius compiles to a native binary, so there is no JVM heap to tune. Enforce memory limits at the container or service level instead:
docker run --memory=512m navius:latest
Monitoring & Observability
Navius provides built-in observability features:
Prometheus Metrics
Metrics are available at the `/metrics` endpoint. Configure Prometheus to scrape this endpoint.
Health Checks
Health checks are available at:
- `/health` - Basic health check
- `/actuator/health` - Detailed component health
Logging
Navius uses structured logging with tracing. Configure log aggregation with:
- AWS CloudWatch Logs
- ELK Stack (Elasticsearch, Logstash, Kibana)
- Datadog
- New Relic
Example log configuration:
RUST_LOG=info,navius=debug
LOG_FORMAT=json
Scaling Strategies
Navius applications can scale both vertically and horizontally:
Vertical Scaling
Navius is extremely efficient with resources. For many applications, a modest instance size is sufficient:
- AWS: t3.small or t3.medium
- GCP: e2-standard-2
- Azure: Standard_B2s
Horizontal Scaling
For high-traffic applications, horizontal scaling is recommended:
- Deploy multiple instances behind a load balancer
- Configure sticky sessions if using server-side sessions
- Ensure all state is stored in shared resources (database, Redis)
Security Best Practices
TLS Configuration
Always use TLS in production. Configure your load balancer or reverse proxy with:
- TLS 1.2/1.3 only
- Strong cipher suites
- HTTP/2 support
- HSTS headers
Firewall Rules
Restrict access to your instances:
- Allow only necessary ports
- Implement network segmentation
- Use security groups (AWS) or equivalent
Regular Updates
Keep your Navius application updated:
cargo update
cargo build --release
CI/CD Pipeline Integration
Navius works well with modern CI/CD pipelines:
GitHub Actions
name: Deploy to Production
on:
push:
branches: [ main ]
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
- name: Build
run: cargo build --release
- name: Run tests
run: cargo test
- name: Build Docker image
run: docker build -t yourdockerhub/navius:${{ github.sha }} .
- name: Push Docker image
run: |
docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
docker push yourdockerhub/navius:${{ github.sha }}
- name: Deploy to ECS
uses: aws-actions/amazon-ecs-deploy-task-definition@v1
with:
task-definition: task-definition.json
service: navius-service
cluster: navius-cluster
image: yourdockerhub/navius:${{ github.sha }}
GitLab CI
stages:
- build
- test
- deploy
build:
stage: build
image: rust:latest
script:
- cargo build --release
artifacts:
paths:
- target/release/navius
test:
stage: test
image: rust:latest
script:
- cargo test
deploy:
stage: deploy
image: docker:latest
services:
- docker:dind
script:
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- kubectl set image deployment/navius navius=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
only:
- main
Database Migrations
For database migrations, Navius integrates with SQLx migrations:
# Create a new migration
cargo sqlx migrate add create_users_table
# Run migrations (automatic during application startup)
cargo sqlx migrate run
In production, migrations can be run:
- Automatically on application startup
- Via a dedicated migration job before deployment
- Manually in controlled environments
Troubleshooting Common Issues
High Memory Usage
Symptom: Memory usage grows over time
Solution: Check for resource leaks, particularly in custom code that holds onto resources
Slow Startup
Symptom: Application takes a long time to start
Solution: Build with the `--release` flag, use the minimal Docker image, or precompile Rust code
Database Connection Issues
Symptom: Application fails to connect to the database
Solution: Verify connection strings, network connectivity, and firewall rules
Conclusion
Navius's efficient resource usage, fast startup time, and resilient design make it an excellent choice for production deployments of any scale. By following the recommendations in this guide, you can ensure your application performs optimally in production environments.
Related Documents
- Installation Guide - How to install the application
- Development Workflow - Development best practices
title: "Docker Deployment Guide for Navius" description: "Comprehensive guide for containerizing and deploying Navius applications using Docker with best practices for configuration, optimization, and security" category: "guides" tags:
- docker
- deployment
- containers
- devops
- containerization related:
- production-deployment.md
- kubernetes-deployment.md
- ../../05_reference/configuration/environment-variables.md
- ../operations/security.md last_updated: "April 1, 2025" version: "1.0"
Docker Deployment Guide for Navius
Overview
This guide provides comprehensive instructions for containerizing and deploying Navius applications using Docker. Navius is particularly well-suited for containerization due to its small footprint, minimal dependencies, and fast startup time.
Prerequisites
Before containerizing your Navius application, ensure you have:
- Docker installed on your development machine (version 20.10.0 or later)
- A Navius application codebase
- Access to a Docker registry (Docker Hub, GitLab Container Registry, etc.)
- Basic understanding of Docker concepts
Dockerfile
Basic Dockerfile
Create a `Dockerfile` in the root of your project with the following content:
# Build stage
FROM rust:1.72-slim as builder
WORKDIR /usr/src/navius
COPY . .
RUN apt-get update && apt-get install -y pkg-config libssl-dev && \
cargo build --release
# Runtime stage
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates libssl-dev && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /usr/src/navius/target/release/navius /app/
COPY --from=builder /usr/src/navius/config /app/config
ENV CONFIG_PATH=/app/config
EXPOSE 8080
CMD ["./navius"]
Multi-Stage Build Explanation
This Dockerfile uses a multi-stage build approach:
-
Builder Stage:
- Uses the Rust image to compile the application
- Installs necessary build dependencies
- Compiles the application with optimizations
-
Runtime Stage:
- Uses a minimal Debian image for runtime
- Copies only the compiled binary and configuration files
- Installs only the runtime dependencies
- Results in a much smaller final image
Building the Docker Image
Basic Build Command
Build your Docker image with:
docker build -t navius:latest .
Optimized Build
For a production-ready build with additional metadata:
docker build \
--build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
--build-arg VERSION=$(git describe --tags --always) \
--build-arg COMMIT_HASH=$(git rev-parse HEAD) \
-t navius:latest \
-t navius:$(git describe --tags --always) \
.
Cross-Platform Building
To build for multiple platforms using Docker BuildX:
docker buildx create --name multiplatform --use
docker buildx build --platform linux/amd64,linux/arm64 -t yourregistry/navius:latest --push .
Configuration
Environment Variables
Navius applications typically use environment variables for configuration. When running in Docker, set these variables using the `-e` flag:
docker run -e DATABASE_URL=postgres://user:pass@host/db -e RUST_LOG=info navius:latest
Configuration Files
For more complex configuration, mount a configuration directory:
docker run -v /host/path/to/config:/app/config navius:latest
Docker Compose Setup
For a complete setup with dependencies, create a `docker-compose.yml` file:
version: '3.8'
services:
navius-api:
image: navius:latest
build: .
ports:
- "8080:8080"
environment:
- RUST_LOG=info
- CONFIG_PATH=/app/config
- DATABASE_URL=postgres://postgres:postgres@postgres:5432/navius
- REDIS_URL=redis://redis:6379
volumes:
- ./config:/app/config
depends_on:
- postgres
- redis
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
postgres:
image: postgres:15-alpine
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=navius
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5
volumes:
postgres_data:
redis_data:
Image Optimization
Builder Optimization
For faster builds, add a `.dockerignore` file:
target/
.git/
.github/
.vscode/
.idea/
tests/
*.md
*.log
Size Optimization
For the smallest possible image, consider using Alpine Linux. Note that a binary built in the Debian-based builder stage links against glibc; to run on Alpine it must be built for the musl target (e.g. with `--target x86_64-unknown-linux-musl`).
# Runtime stage
FROM alpine:3.19
RUN apk add --no-cache ca-certificates libssl3
WORKDIR /app
COPY --from=builder /usr/src/navius/target/release/navius /app/
COPY --from=builder /usr/src/navius/config /app/config
ENV CONFIG_PATH=/app/config
EXPOSE 8080
CMD ["./navius"]
Security Optimization
For enhanced security, run as a non-root user:
# Add a navius user and group
RUN addgroup -S navius && adduser -S navius -G navius
# Change ownership
RUN chown -R navius:navius /app
# Switch to navius user
USER navius
EXPOSE 8080
CMD ["./navius"]
Running in Production
Basic Run Command
Run your containerized Navius application:
docker run -d -p 8080:8080 --name navius-api navius:latest
Resource Constraints
Set resource limits for production deployments:
docker run -d -p 8080:8080 \
--memory=512m \
--cpus=0.5 \
--restart=unless-stopped \
--name navius-api \
navius:latest
Logging Configuration
Configure logging for production:
docker run -d -p 8080:8080 \
-e RUST_LOG=info \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
--name navius-api \
navius:latest
Health Checks
Use Docker health checks to monitor application health:
docker run -d -p 8080:8080 \
--health-cmd "curl -f http://localhost:8080/actuator/health || exit 1" \
--health-interval=30s \
--health-timeout=10s \
--health-retries=3 \
--health-start-period=10s \
--name navius-api \
navius:latest
Docker Registry Integration
Pushing to a Registry
Push your image to a Docker registry:
# Tag the image for your registry
docker tag navius:latest registry.example.com/navius:latest
# Push to registry
docker push registry.example.com/navius:latest
Using Private Registries
For private registries, log in first. Avoid passing the password on the command line (it ends up in your shell history); pipe it via stdin instead:

```bash
echo "$REGISTRY_PASSWORD" | docker login registry.example.com -u username --password-stdin
```
CI/CD Integration
GitHub Actions Example
Here's a GitHub Actions workflow for building and publishing your Docker image:
```yaml
name: Build and Publish Docker Image

on:
  push:
    branches: [ main ]
    tags: [ 'v*' ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: yourusername/navius
          tags: |
            type=semver,pattern={{version}}
            type=ref,event=branch

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
Monitoring
Prometheus Integration
Navius provides Prometheus metrics. Enable with:
```bash
docker run -d -p 8080:8080 -p 9090:9090 \
  -e ENABLE_METRICS=true \
  -e METRICS_PORT=9090 \
  navius:latest
```
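Once the metrics port is exposed, an external Prometheus server needs a scrape job pointing at it. A minimal fragment for `prometheus.yml` might look like the following (the job name, target, and metrics path are illustrative assumptions — check your Navius metrics configuration for the actual path):

```yaml
scrape_configs:
  - job_name: "navius"
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ["localhost:9090"]
```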
Container Monitoring
To monitor the container itself, consider:
- Docker stats: `docker stats navius-api`
- cAdvisor: A container monitoring tool
- Prometheus Node Exporter with Docker metrics enabled
Best Practices
- Use Multi-Stage Builds to keep images small
- Run as a Non-Root User for security
- Implement Health Checks for reliability
- Pin Dependency Versions (e.g., `FROM rust:1.72-slim` instead of `FROM rust:latest`)
- Keep Images Small by removing build tools and unused files
- Use Docker Compose for local development with dependencies
- Secure Sensitive Data using Docker secrets or environment variables
- Tag Images Properly for version control (`latest` plus version tags)
- Scan Images for Vulnerabilities using tools like Trivy or Clair
- Set Resource Limits to prevent container resource exhaustion
Troubleshooting
Common Issues
- Image Too Large:
  - Use multi-stage builds
  - Minimize layers
  - Use smaller base images like Alpine
- Slow Build Times:
  - Use Docker BuildKit (`DOCKER_BUILDKIT=1 docker build ...`)
  - Optimize `.dockerignore`
  - Use build caching effectively
- Container Won't Start:
  - Check logs: `docker logs navius-api`
  - Verify environment variables
  - Ensure proper permissions on mounted volumes
- Permission Issues:
  - Ensure correct ownership of files
  - Check volume mount permissions
  - Verify the user running the container has the necessary permissions
Debugging Commands
```bash
# Check container logs
docker logs navius-api

# Inspect container details
docker inspect navius-api

# Execute a command inside the container
docker exec -it navius-api /bin/sh

# Check resource usage
docker stats navius-api
```
Related Resources
- Production Deployment Guide - General production deployment guidelines
- Kubernetes Deployment Guide - Deploying with Kubernetes
- Environment Variables Reference - Configuration options
- Security Guide - Security considerations
title: "AWS Deployment" description: "" category: "Documentation" tags: [] last_updated: "March 28, 2025" version: "1.0"
AWS Deployment
title: "Kubernetes Deployment Guide for Navius" description: "Comprehensive guide for deploying and managing Navius applications in Kubernetes environments with best practices for scalability, resource management, and observability" category: "guides" tags:
- kubernetes
- deployment
- k8s
- containers
- orchestration
- cloud-native related:
- production-deployment.md
- cloud-deployment.md
- ../../05_reference/configuration/environment-variables.md
- ../operations/monitoring.md last_updated: "April 1, 2025" version: "1.0"
Kubernetes Deployment Guide for Navius
Overview
This guide provides detailed instructions for deploying Navius applications to Kubernetes clusters. Navius is well-suited for Kubernetes deployments due to its lightweight nature, small memory footprint, and fast startup times.
Prerequisites
Before deploying Navius to Kubernetes, ensure you have:
- A functioning Kubernetes cluster (v1.20+)
- kubectl CLI configured to access your cluster
- Docker registry access for storing container images
- Basic understanding of Kubernetes concepts (Deployments, Services, ConfigMaps)
- A containerized Navius application (see the Docker Deployment Guide)
Deployment Manifest
Basic Deployment
Create a file named `navius-deployment.yaml` with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navius-api
  labels:
    app: navius-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: navius-api
  template:
    metadata:
      labels:
        app: navius-api
    spec:
      containers:
        - name: navius-api
          image: your-registry/navius:latest
          ports:
            - containerPort: 8080
          env:
            - name: RUST_LOG
              value: "info"
            - name: RUN_ENV
              value: "production"
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              cpu: "0.1"
              memory: "128Mi"
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
```
Service Definition
Create a file named `navius-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: navius-api
spec:
  selector:
    app: navius-api
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
```
Ingress Configuration
For external access, create `navius-ingress.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navius-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: navius.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: navius-api
                port:
                  number: 80
  tls:
    - hosts:
        - navius.example.com
      secretName: navius-tls-secret
```
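The `navius-tls-secret` referenced above must exist in the same namespace before TLS can work. Assuming you already have a certificate and key, it is a standard `kubernetes.io/tls` Secret (the file contents below are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: navius-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

Equivalently, `kubectl create secret tls navius-tls-secret --cert=tls.crt --key=tls.key` builds the same object from local files.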
Configuration Management
ConfigMap for Application Settings
Create a configuration map for Navius settings:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: navius-config
data:
  config.yaml: |
    server:
      host: "0.0.0.0"
      port: 8080
    logging:
      level: "info"
      format: "json"
    cache:
      enabled: true
      redis_url: "redis://redis-service:6379"
```
Update your deployment to mount this ConfigMap:
```yaml
spec:
  containers:
    - name: navius-api
      # ... other settings ...
      volumeMounts:
        - name: config-volume
          mountPath: /etc/navius/config
  volumes:
    - name: config-volume
      configMap:
        name: navius-config
```
Secrets Management
For sensitive information like database credentials:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: navius-secrets
type: Opaque
data:
  database_url: cG9zdGdyZXM6Ly91c2VyOnBhc3NAZGItc2VydmljZTo1NDMyL25hdml1cw== # Base64 encoded
  jwt_secret: c2VjcmV0X2tleV9jaGFuZ2VfbWVfaW5fcHJvZHVjdGlvbg== # Base64 encoded
```
Reference these secrets in your deployment:
```yaml
spec:
  containers:
    - name: navius-api
      # ... other settings ...
      env:
        # ... other env vars ...
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: navius-secrets
              key: database_url
        - name: JWT_SECRET
          valueFrom:
            secretKeyRef:
              name: navius-secrets
              key: jwt_secret
```
Scaling Configuration
Horizontal Pod Autoscaler
Create an HPA for automatic scaling:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: navius-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: navius-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

Note that resource-based autoscaling requires the metrics-server (or an equivalent metrics API provider) to be installed in the cluster.
Resource Optimization
Navius applications are lightweight and can be optimized for Kubernetes:
Resource Limits and Requests
```yaml
resources:
  limits:
    cpu: "0.5"       # Maximum CPU usage
    memory: "256Mi"  # Maximum memory usage
  requests:
    cpu: "0.1"       # Guaranteed CPU reservation (used for scheduling)
    memory: "128Mi"  # Guaranteed memory reservation (used for scheduling)
```
These values are conservative and can be adjusted based on your workload.
Health Checks and Readiness
Navius provides built-in health endpoints that work well with Kubernetes:
```yaml
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3
  timeoutSeconds: 2
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
  timeoutSeconds: 2
  failureThreshold: 3
```
Deployment Process
Deploy to Kubernetes
Apply the manifests in this order:
```bash
# Create ConfigMap and Secret first
kubectl apply -f navius-config.yaml
kubectl apply -f navius-secrets.yaml

# Deploy the application
kubectl apply -f navius-deployment.yaml

# Create the service
kubectl apply -f navius-service.yaml

# Configure ingress
kubectl apply -f navius-ingress.yaml

# Set up autoscaling
kubectl apply -f navius-hpa.yaml
```
Verify Deployment
Check deployment status:

```bash
kubectl get deployments
kubectl get pods
kubectl get services
```

Test the service:

```bash
# Port-forward for local testing
kubectl port-forward svc/navius-api 8080:80

# Then access in your browser: http://localhost:8080/actuator/health
```
Advanced Configuration
Affinity and Anti-Affinity Rules
For better pod distribution:
```yaml
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - navius-api
            topologyKey: "kubernetes.io/hostname"
```
Pod Disruption Budget
To ensure high availability during maintenance:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: navius-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: navius-api
```
Monitoring and Observability
Prometheus Integration
Navius exports Prometheus metrics. Create a ServiceMonitor:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: navius-metrics
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: navius-api
  endpoints:
    - port: http
      path: /actuator/prometheus
```
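Note that the ServiceMonitor's `port: http` matches against a *named* port on the Service. The Service shown earlier declares its port without a name, so for this ServiceMonitor to select it you would add a name to the port entry, for example:

```yaml
  ports:
    - name: http
      port: 80
      targetPort: 8080
```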
Grafana Dashboards
The Navius Grafana dashboards can be imported to visualize metrics:
- Dashboard for general application health (ID: 12345)
- Dashboard for API endpoint metrics (ID: 12346)
- Dashboard for service dependencies (ID: 12347)
Troubleshooting
Common Issues
- Pod fails to start:
  - Check logs: `kubectl logs <pod-name>`
  - Verify resource limits: `kubectl describe pod <pod-name>`
- Service unreachable:
  - Verify endpoints: `kubectl get endpoints navius-api`
  - Check service: `kubectl describe service navius-api`
- Configuration issues:
  - Validate ConfigMap: `kubectl describe configmap navius-config`
  - Check environment variables in the running pod
Debugging Tools
```bash
# Shell into a pod for debugging
kubectl exec -it <pod-name> -- /bin/sh

# Check application logs
kubectl logs <pod-name> -f

# Check events
kubectl get events
```
Best Practices
- Use namespaces to isolate different environments (dev, staging, prod)
- Configure resource limits and requests properly to avoid resource contention
- Implement proper health checks using Navius's built-in health endpoints
- Use GitOps for managing Kubernetes manifests
- Set up proper monitoring with Prometheus and Grafana
- Use a CI/CD pipeline for automated deployments
- Implement secrets management using Kubernetes Secrets or external solutions
- Enable network policies for additional security
Related Resources
- Production Deployment Guide - General production deployment guidelines
- Cloud Deployment Guide - Cloud-specific deployment options
- Environment Variables Reference - Configuration options
- Monitoring Guide - Setting up monitoring for Navius
title: Reference Documentation description: "Technical reference documentation for Navius, including API specifications, architectural patterns, coding standards, and configuration guides" category: reference tags:
- reference
- api
- architecture
- standards
- patterns
- configuration
- security related:
- ../guides/README.md
- ../guides/development/README.md
- ../guides/features/README.md last_updated: March 27, 2025 version: 1.0
Navius Reference Documentation
This section provides detailed technical reference documentation for the Navius framework. It serves as the authoritative source for API specifications, architectural patterns, coding standards, and configuration options.
Quick Reference
- API Documentation - Complete API reference
- Architecture Guide - Core architectural principles
- Coding Standards - Development standards and conventions
- Configuration Reference - Configuration options and environment variables
Documentation Sections
API Reference
Comprehensive documentation for Navius APIs and integrations.
- API Resource Pattern - Core API resource abstraction
- API Resource Guide - Implementing API resources
- API Resource Reference - API resource specifications
- Authentication API - Authentication endpoints
- Database API - Database operations
- Two-Tier Cache API - Caching system API reference
Architecture
Core architectural concepts and patterns.
- Architectural Principles - Core design principles
- Project Structure - Recommended project organization
- Directory Organization - Directory structure guide
- Component Architecture - Component design patterns
Patterns and Best Practices
Common patterns and recommended approaches.
- API Resource Pattern - API abstraction pattern
- Import Patterns - Module import guidelines
- Error Handling Patterns - Error management
- Testing Patterns - Testing strategies
Standards and Conventions
Development standards and coding conventions.
- Naming Conventions - Naming guidelines
- Code Style - Code formatting standards
- Generated Code - Working with generated code
- Security Standards - Security requirements
- Documentation Standards - Documentation guidelines
Configuration
Configuration options and environment setup.
- Environment Variables - Environment configuration
- Application Config - Application settings
- Logging Config - Logging configuration
- Security Config - Security settings
Using the Reference Documentation
This reference documentation is organized to help you:
-
Find Information Quickly
- Use the quick reference links above
- Navigate through specific sections
- Search for specific topics
-
Understand Patterns
- Review architectural principles
- Learn common patterns
- Follow best practices
-
Implement Features
- Reference API specifications
- Follow coding standards
- Configure components properly
-
Maintain Code Quality
- Apply coding standards
- Follow security guidelines
- Use recommended patterns
Related Resources
- Development Guides - Development workflow and practices
- Feature Guides - Feature implementation guides
- Deployment Guides - Deployment instructions
- Getting Started - Quick start guides
Contributing to Reference Documentation
When contributing to the reference documentation:
- Follow the Documentation Standards
- Include clear code examples
- Keep information up-to-date
- Cross-reference related documents
Need Help?
If you need help understanding the reference documentation:
- Check the examples in each document
- Review related guides in the Guides section
- Join our Discord Community for assistance
- Open an issue on our GitHub repository
title: API Reference description: Detailed documentation of Navius API resources and patterns category: reference tags:
- api
- resources
- patterns related:
- ../README.md
- ../../04_guides/features/api-integration.md last_updated: March 31, 2025 version: 1.0
API Reference
This section contains detailed technical documentation for the Navius API. It includes comprehensive information about API resources, patterns, and implementations.
Document List
- API Resources - Complete reference for all API resources
- API Patterns - Detailed explanation of API patterns used in Navius
Key Documents
For API development, the following documents are essential:
API Overview
The Navius API follows RESTful principles and provides:
- Resource-based endpoints
- Consistent request/response formats
- Comprehensive error handling
- Authentication and authorization controls
- Versioning support
Use this reference documentation when you need detailed specifications for API endpoints, data structures, or implementation patterns.
title: "API Resource Abstraction" description: "Reference documentation for Navius" category: "Reference" tags: ["documentation", "reference"] last_updated: "April 3, 2025" version: "1.0"
API Resource Abstraction
This document explains the API resource abstraction pattern used in our project, which provides a unified way to handle API resources with built-in reliability features.
Overview
The API resource abstraction provides a clean, consistent pattern for handling external API interactions with the following features:
- Automatic caching: Resources are cached to reduce latency and external API calls
- Retry mechanism: Failed API calls are retried with exponential backoff
- Consistent error handling: All API errors are handled in a consistent way
- Standardized logging: API interactions are logged with consistent format
- Type safety: Strong typing ensures correctness at compile time
Core Components
The abstraction consists of the following components:
- ApiResource trait: Interface that resources must implement
- ApiHandlerOptions: Configuration options for handlers
- create_api_handler: Factory function to create Axum handlers with reliability features
- Support functions: Caching and retry helpers
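To make the retry behaviour concrete, here is a minimal, std-only sketch of retry with exponential backoff as described above. The function name and parameters are illustrative, not the framework's internal API (which works with async futures rather than blocking closures):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay between attempts.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    base_delay_ms: u64,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(base_delay_ms);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // Out of attempts: surface the last error to the caller.
            Err(err) if attempt >= max_attempts => return Err(err),
            Err(_) => {
                sleep(delay); // wait before the next attempt
                delay *= 2;   // exponential backoff
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate a service that fails twice, then succeeds on the third call.
    let mut calls = 0;
    let result = retry_with_backoff(5, 1, || {
        calls += 1;
        if calls < 3 { Err("transient error") } else { Ok("ok") }
    });
    assert_eq!(result, Ok("ok"));
    assert_eq!(calls, 3);
    println!("succeeded after {calls} attempts");
}
```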
Using the Pattern
1. Implementing ApiResource for your model
```rust
use crate::utils::api_resource::ApiResource;

// Your model structure
#[derive(Debug, Clone, Serialize, Deserialize)]
struct User {
    id: i64,
    name: String,
    email: String,
}

// Implement ApiResource for your model
impl ApiResource for User {
    type Id = i64; // The type of the ID field

    fn resource_type() -> &'static str {
        "user" // Used for caching and logging
    }

    fn api_name() -> &'static str {
        "UserService" // Used for logging
    }
}
```
2. Creating a Fetch Function
```rust
async fn fetch_user(state: &Arc<AppState>, id: i64) -> Result<User> {
    let url = format!("{}/users/{}", state.config.user_service_url, id);

    // Create a closure that returns the actual request future
    let fetch_fn = || async { state.client.get(&url).send().await };

    // Make the API call using the common logger/handler
    api_logger::api_call("UserService", &url, fetch_fn, "User", id).await
}
```
3. Creating an API Handler
```rust
pub async fn get_user_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<String>,
) -> Result<Json<User>> {
    // Define the fetch function inline to avoid lifetime issues
    let fetch_fn = move |state: &Arc<AppState>, id: i64| -> futures::future::BoxFuture<'static, Result<User>> {
        let state = state.clone(); // Clone the state to avoid lifetime issues
        Box::pin(async move {
            // Your actual API call logic here
            // ...
        })
    };

    // Create an API handler with reliability features
    let handler = create_api_handler(
        fetch_fn,
        ApiHandlerOptions {
            use_cache: true,
            use_retries: true,
            max_retry_attempts: 3,
            cache_ttl_seconds: 300,
            detailed_logging: true,
        },
    );

    // Execute the handler
    handler(State(state), Path(id)).await
}
```
Configuration Options
The `ApiHandlerOptions` struct provides the following configuration options:

```rust
struct ApiHandlerOptions {
    use_cache: bool,         // Whether to use caching
    use_retries: bool,       // Whether to retry failed requests
    max_retry_attempts: u32, // Maximum number of retry attempts (default: 3)
    cache_ttl_seconds: u64,  // Cache time-to-live in seconds (default: 300)
    detailed_logging: bool,  // Whether to log detailed information (default: true)
}
```
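To illustrate what `cache_ttl_seconds` controls, here is a minimal, std-only sketch of TTL-based caching. The types and method names are illustrative only — they are not the framework's internal cache, which is shared, async-aware, and keyed per resource type:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Minimal TTL cache: entries expire once they are older than the TTL.
struct TtlCache<V> {
    ttl: Duration,
    entries: HashMap<String, (Instant, V)>,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl_seconds: u64) -> Self {
        Self { ttl: Duration::from_secs(ttl_seconds), entries: HashMap::new() }
    }

    fn insert(&mut self, key: &str, value: V) {
        // Record when the value was stored so age can be checked on read.
        self.entries.insert(key.to_string(), (Instant::now(), value));
    }

    /// Returns the value only while it is younger than the TTL.
    fn get(&self, key: &str) -> Option<V> {
        self.entries.get(key).and_then(|(stored_at, value)| {
            if stored_at.elapsed() < self.ttl { Some(value.clone()) } else { None }
        })
    }
}

fn main() {
    let mut cache = TtlCache::new(300); // mirrors cache_ttl_seconds: 300
    cache.insert("user:42", "Jane");
    assert_eq!(cache.get("user:42"), Some("Jane")); // fresh entry: cache hit
    assert_eq!(cache.get("user:99"), None);         // never cached: miss
    println!("cache hit: {:?}", cache.get("user:42"));
}
```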
Best Practices
- Keep fetch functions simple: They should focus on the API call logic
- Use consistent naming: Name conventions help with maintenance
- Add appropriate logging: Additional context helps with debugging
- Handle errors gracefully: Return appropriate error codes to clients
- Test thoroughly: Verify behavior with unit tests for each handler
Example Use Cases
Basic Handler with Default Options
```rust
pub async fn get_product_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<String>,
) -> Result<Json<Product>> {
    create_api_handler(
        fetch_product,
        ApiHandlerOptions {
            use_cache: true,
            use_retries: true,
            max_retry_attempts: 3,
            cache_ttl_seconds: 300,
            detailed_logging: true,
        },
    )(State(state), Path(id))
    .await
}
```
Custom Handler with Specific Options
```rust
pub async fn get_weather_handler(
    State(state): State<Arc<AppState>>,
    Path(location): Path<String>,
) -> Result<Json<Weather>> {
    create_api_handler(
        fetch_weather,
        ApiHandlerOptions {
            use_cache: true,         // Weather data can be cached
            use_retries: false,      // Weather requests shouldn't retry
            max_retry_attempts: 1,
            cache_ttl_seconds: 60,   // Weather data changes frequently
            detailed_logging: false, // High-volume endpoint, reduce logging
        },
    )(State(state), Path(location))
    .await
}
```
Troubleshooting
Cache Not Working
If caching isn't working as expected:
- Verify the `use_cache` option is set to `true`
- Ensure the `ApiResource` implementation is correct
- Check that the cache is enabled in the application state
Retries Not Working
If retries aren't working as expected:
- Verify the `use_retries` option is set to `true`
- Check the error type (only service errors are retried)
- Inspect the logs for retry attempts
Extending the Abstraction
This section explains how to extend the API resource abstraction to support new resource types beyond the existing ones.
Current Limitations
The current implementation has specialized type conversions for certain resource types, but it's designed to be extended.
Adding Support for a New Resource Type
1. Identify Your Resource Type
For example, let's say you want to add support for a new `Product` type:

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Product {
    id: i64,
    name: String,
    price: f64,
}
```
2. Implement the ApiResource Trait
```rust
impl ApiResource for Product {
    type Id = i64;

    fn resource_type() -> &'static str {
        "product"
    }

    fn api_name() -> &'static str {
        "ProductAPI"
    }
}
```
3. Update the Type Conversions
Modify the type conversion functions in `src/utils/api_resource/core.rs`:

```rust
fn convert_cached_resource<R: ApiResource>(cached: impl Any) -> Option<R> {
    // existing code for other types...

    // Handle Product resources
    else if type_id == std::any::TypeId::of::<Product>() {
        if let Some(product) = cached.downcast_ref::<Product>() {
            let boxed: Box<dyn Any> = Box::new(product.clone());
            let resource_any: Box<dyn Any> = boxed;
            if let Ok(typed) = resource_any.downcast::<R>() {
                return Some(*typed);
            }
        }
    }

    None
}
```
4. Update the Cache Type if Needed
Depending on your needs, you may need to update the cache structure to handle multiple resource types.
Future Enhancements
Planned enhancements to the pattern include:
- Generic cache implementation that can work with any resource type
- Circuit breaker pattern for automatically handling failing services
- Integration with distributed tracing
- Dynamic configuration of retry and caching policies
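The circuit-breaker enhancement mentioned above is not implemented yet, but the idea can be sketched in a few lines of std-only Rust. All names here are illustrative; a production version would also need to be shared across tasks and async-aware:

```rust
use std::time::{Duration, Instant};

/// Minimal circuit breaker: opens after `threshold` consecutive failures
/// and rejects calls without invoking the service until `cooldown` elapses.
struct CircuitBreaker {
    threshold: u32,
    cooldown: Duration,
    consecutive_failures: u32,
    opened_at: Option<Instant>,
}

impl CircuitBreaker {
    fn new(threshold: u32, cooldown: Duration) -> Self {
        Self { threshold, cooldown, consecutive_failures: 0, opened_at: None }
    }

    /// `Err(None)` means the call was rejected by the open circuit;
    /// `Err(Some(e))` means the underlying operation itself failed.
    fn call<T, E>(&mut self, op: impl FnOnce() -> Result<T, E>) -> Result<T, Option<E>> {
        if let Some(opened) = self.opened_at {
            if opened.elapsed() < self.cooldown {
                return Err(None); // open: fail fast, don't hit the service
            }
            self.opened_at = None; // half-open: allow one trial call
        }
        match op() {
            Ok(v) => {
                self.consecutive_failures = 0; // success closes the circuit
                Ok(v)
            }
            Err(e) => {
                self.consecutive_failures += 1;
                if self.consecutive_failures >= self.threshold {
                    self.opened_at = Some(Instant::now()); // trip the breaker
                }
                Err(Some(e))
            }
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(2, Duration::from_secs(60));
    assert!(cb.call(|| Err::<(), _>("boom")).is_err()); // failure 1
    assert!(cb.call(|| Err::<(), _>("boom")).is_err()); // failure 2 -> opens
    // Circuit is now open: the call is rejected without reaching the service.
    assert_eq!(cb.call(|| Ok::<_, &str>(1)), Err(None));
}
```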
Related Documents
- API Standards - API design guidelines
- Error Handling - Error handling patterns
title: "Authentication API Reference" description: "Reference documentation for Navius Authentication API endpoints, request/response formats, and integration patterns" category: "Reference" tags: ["api", "authentication", "security", "jwt", "oauth", "endpoints"] last_updated: "April 9, 2025" version: "1.0"
Authentication API Reference
Overview
The Navius Authentication API provides endpoints for user authentication, token management, and session control. It supports multiple authentication methods including JWT-based authentication, OAuth2 with Microsoft Entra, and API key authentication.
This reference document details all endpoints, data structures, and integration patterns for implementing authentication in Navius applications.
Authentication Methods
Navius supports the following authentication methods:
| Method | Use Case | Security Level | Configuration |
|---|---|---|---|
| JWT | General purpose authentication | High | Requires JWT secret |
| Microsoft Entra | Enterprise authentication | Very High | Requires tenant configuration |
| API Keys | Service-to-service | Medium | Requires key management |
| Session Cookies | Web applications | Medium-High | Requires session configuration |
API Endpoints
User Authentication
POST /auth/login
Authenticates a user and returns a JWT token.
Request Body:
```json
{
  "username": "[email protected]",
  "password": "secure_password"
}
```
Response (200 OK):
```json
{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "refreshToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expiresIn": 3600,
  "tokenType": "Bearer"
}
```
Error Responses:
- `401 Unauthorized`: Invalid credentials
- `403 Forbidden`: Account locked or disabled
- `429 Too Many Requests`: Rate limit exceeded
Curl Example:
```bash
curl -X POST http://localhost:3000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"[email protected]","password":"secure_password"}'
```
Code Example:
```rust
// Client-side authentication request
async fn login(client: &Client, username: &str, password: &str) -> Result<TokenResponse> {
    let response = client
        .post("http://localhost:3000/auth/login")
        .json(&json!({
            "username": username,
            "password": password
        }))
        .send()
        .await?;

    if response.status().is_success() {
        Ok(response.json::<TokenResponse>().await?)
    } else {
        Err(format!("Authentication failed: {}", response.status()).into())
    }
}
```
POST /auth/refresh
Refreshes an expired JWT token using a refresh token.
Request Body:
```json
{
  "refreshToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
```
Response (200 OK):
```json
{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "refreshToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expiresIn": 3600,
  "tokenType": "Bearer"
}
```
Error Responses:
- `401 Unauthorized`: Invalid refresh token
- `403 Forbidden`: Refresh token revoked
POST /auth/logout
Invalidates the current session or token.
Request Headers:
```
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```
Response (200 OK):
```json
{
  "message": "Successfully logged out"
}
```
Microsoft Entra Integration
GET /auth/entra/login
Initiates Microsoft Entra authentication flow.
Query Parameters:
- `redirect_uri` (required): URL to redirect to after authentication
- `state` (optional): State parameter for CSRF protection
Response:
Redirects to Microsoft Entra login page.
GET /auth/entra/callback
Callback endpoint for Microsoft Entra authentication.
Query Parameters:
- `code` (required): Authorization code from Microsoft Entra
- `state` (optional): State parameter for CSRF protection
Response:
Redirects to the original `redirect_uri` with token information.
API Key Management
POST /auth/apikeys
Creates a new API key for service-to-service authentication.
Request Headers:
```
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```
Request Body:
```json
{
  "name": "Service Integration Key",
  "permissions": ["read:users", "write:data"],
  "expiresIn": 2592000
}
```

(`expiresIn` is given in seconds; 2592000 is 30 days.)
Response (201 Created):
```json
{
  "id": "api_123456789",
  "key": "sk_live_abcdefghijklmnopqrstuvwxyz123456789",
  "name": "Service Integration Key",
  "permissions": ["read:users", "write:data"],
  "createdAt": "2025-04-09T10:15:30Z",
  "expiresAt": "2025-05-09T10:15:30Z"
}
```
Data Models
TokenResponse
```rust
/// Response containing authentication tokens
struct TokenResponse {
    /// JWT access token
    token: String,
    /// JWT refresh token
    refresh_token: String,
    /// Token validity in seconds
    expires_in: u64,
    /// Token type (always "Bearer")
    token_type: String,
}
```
ApiKey
```rust
/// API key for service-to-service authentication
struct ApiKey {
    /// Unique identifier
    id: String,
    /// Secret key (only returned on creation)
    key: Option<String>,
    /// Display name
    name: String,
    /// List of permissions
    permissions: Vec<String>,
    /// Creation timestamp
    created_at: DateTime<Utc>,
    /// Expiration timestamp
    expires_at: DateTime<Utc>,
}
```
Integration Patterns
JWT Authentication Flow
1. Client calls `/auth/login` with credentials
2. Server validates credentials and returns a JWT + refresh token
3. Client stores the tokens and includes the JWT in subsequent requests
4. When the JWT expires, the client uses the refresh token to get a new JWT
5. For logout, the client calls `/auth/logout` and discards the tokens
```rust
// Server-side handler for protected routes
async fn protected_route(
    auth: AuthExtractor,
    // other parameters
) -> Result<impl IntoResponse> {
    // Auth middleware extracts and validates the JWT automatically;
    // if the JWT is invalid, this route is never reached.

    // Access the authenticated user
    let user_id = auth.user_id;

    // Access user permissions
    if !auth.has_permission("read:resource") {
        return Err(AppError::forbidden("Insufficient permissions"));
    }

    // Handle the actual request
    Ok(Json(/* response data */))
}
```
Microsoft Entra (OAuth2) Flow
1. Redirect the user to `/auth/entra/login` with appropriate parameters
2. User authenticates with Microsoft
3. Microsoft redirects to `/auth/entra/callback`
4. Server validates the code and issues JWT tokens
5. Application uses the JWT tokens as in the standard JWT flow
```rust
// Client-side Entra redirect
fn redirect_to_entra_login(redirect_uri: &str) -> String {
    format!(
        "/auth/entra/login?redirect_uri={}&state={}",
        urlencoding::encode(redirect_uri),
        generate_random_state()
    )
}
```
API Key Authentication
1. Administrator creates an API key via `/auth/apikeys`
2. Service includes the API key in requests via a header
3. Server validates the API key and permissions
4. For security, rotate keys periodically
```rust
// Including the API key in a request header
let response = client
    .get("http://api.example.com/resource")
    .header("X-API-Key", "sk_live_abcdefghijklmnopqrstuvwxyz123456789")
    .send()
    .await?;
```
Configuration
Authentication can be configured in `config/auth.yaml`:

```yaml
auth:
  jwt:
    secret: "${JWT_SECRET}"
    expiration: 3600 # 1 hour
    refresh_expiration: 2592000 # 30 days
  entra:
    tenant_id: "${ENTRA_TENANT_ID}"
    client_id: "${ENTRA_CLIENT_ID}"
    client_secret: "${ENTRA_CLIENT_SECRET}"
    redirect_uri: "http://localhost:3000/auth/entra/callback"
  api_keys:
    enabled: true
    max_per_user: 10
    default_expiration: 2592000 # 30 days
```
Best Practices
Security Recommendations
- Use HTTPS: Always use HTTPS for all authentication endpoints
- Token Storage: Store tokens securely (use HttpOnly cookies for web apps)
- Short Expiration: Keep JWT tokens short-lived (1 hour or less)
- Refresh Token Rotation: Issue new refresh tokens with each refresh
- API Key Handling: Treat API keys as sensitive credentials
- Permission Validation: Always validate permissions, not just authentication
Common Pitfalls
- JWT in LocalStorage: Avoid storing JWTs in localStorage (vulnerable to XSS)
- Missing CSRF Protection: Always use state parameter with OAuth flows
- Hardcoded Secrets: Never hardcode secrets in client-side code
- Skipping Validation: Always validate JWTs, even in internal services
- Weak Tokens: Ensure proper entropy in tokens and use proper algorithms
Troubleshooting
Common Issues
- "Invalid token" errors: Check token expiration and signature algorithm
- CORS errors: Ensure authentication endpoints have proper CORS configuration
- Refresh token not working: Verify refresh token hasn't expired or been revoked
- Rate limiting: Check if you're hitting rate limits on authentication endpoints
Debugging
Enable detailed authentication logs by setting:
```bash
RUST_LOG=navius::auth=debug
```
This will show detailed information about token validation, including reasons for rejection.
Related Resources
- Authentication Implementation Guide
- Microsoft Entra Integration Guide
- API Security Guide
- JWT Standard
title: "Application API Reference" description: "Comprehensive reference guide for the Navius Application API, including core components, lifecycle management, routing, middleware, error handling, and state management" category: api tags:
- api
- application
- framework
- routing
- middleware
- error-handling
- state-management
- lifecycle related:
- router-api.md
- config-api.md
- ../../02_examples/basic-application-example.md
- ../../02_examples/custom-service-example.md last_updated: March 31, 2025 version: 1.1 status: stable
Application API Reference
Overview
The Application API in Navius provides the core interfaces and structures for building applications. It offers a standardized approach to application lifecycle, dependency injection, routing, middleware, and error handling.
Core Components
Application
The `Application` struct represents a Navius application:
```rust
pub struct Application {
    name: String,
    router: Router,
    service_registry: Arc<ServiceRegistry>,
    config: Arc<dyn ConfigService>,
    state: AppState,
}

impl Application {
    pub fn name(&self) -> &str {
        &self.name
    }

    pub fn config(&self) -> &Arc<dyn ConfigService> {
        &self.config
    }

    pub fn router(&self) -> &Router {
        &self.router
    }

    pub fn service_registry(&self) -> &Arc<ServiceRegistry> {
        &self.service_registry
    }

    pub fn state(&self) -> &AppState {
        &self.state
    }

    pub fn state_mut(&mut self) -> &mut AppState {
        &mut self.state
    }
}
```
Application Builder
The `ApplicationBuilder` provides a fluent API for configuring and building a Navius application:
```rust
pub struct ApplicationBuilder {
    name: String,
    router_config: RouterConfig,
    service_registry: Arc<ServiceRegistry>,
    config: Option<Arc<dyn ConfigService>>,
    state: AppState,
    startup_hooks: Vec<Box<dyn ApplicationHook>>,
    shutdown_hooks: Vec<Box<dyn ApplicationHook>>,
}

impl ApplicationBuilder {
    pub fn new(name: impl Into<String>) -> Self {
        // Initialize with defaults
    }

    pub fn with_config(mut self, config: Arc<dyn ConfigService>) -> Self {
        self.config = Some(config);
        self
    }

    pub fn with_router_config(mut self, config: RouterConfig) -> Self {
        self.router_config = config;
        self
    }

    pub fn with_service<S: 'static + Send + Sync>(mut self, service: S) -> Result<Self, AppError> {
        self.service_registry.register(service)?;
        Ok(self)
    }

    pub fn with_startup_hook(mut self, hook: Box<dyn ApplicationHook>) -> Self {
        self.startup_hooks.push(hook);
        self
    }

    pub fn with_shutdown_hook(mut self, hook: Box<dyn ApplicationHook>) -> Self {
        self.shutdown_hooks.push(hook);
        self
    }

    pub fn build(self) -> Result<Application, AppError> {
        // Build the application with all configured components
    }
}
```
Application Hook
The `ApplicationHook` trait defines hooks that are called during the application lifecycle:
```rust
pub trait ApplicationHook: Send + Sync {
    fn execute(&self, app: &mut Application) -> Result<(), AppError>;
}
```
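The key behaviors of this pattern are that hooks run in registration order and the first failure aborts the sequence. A self-contained sketch with stand-in types (`Vec<String>` plays the role of the mutable `Application`, `String` the role of `AppError`; the `Hook`, `MigrateDb`, and `WarmCache` names are illustrative, not Navius APIs):

```rust
// Stand-in for ApplicationHook: mutate the "application" or fail startup.
trait Hook {
    fn execute(&self, app: &mut Vec<String>) -> Result<(), String>;
}

struct MigrateDb;
impl Hook for MigrateDb {
    fn execute(&self, app: &mut Vec<String>) -> Result<(), String> {
        app.push("migrations applied".into());
        Ok(())
    }
}

struct WarmCache;
impl Hook for WarmCache {
    fn execute(&self, app: &mut Vec<String>) -> Result<(), String> {
        app.push("cache warmed".into());
        Ok(())
    }
}

fn run_startup_hooks(hooks: &[Box<dyn Hook>], app: &mut Vec<String>) -> Result<(), String> {
    for hook in hooks {
        hook.execute(app)?; // first failing hook stops the sequence
    }
    Ok(())
}

fn main() {
    let hooks: Vec<Box<dyn Hook>> = vec![Box::new(MigrateDb), Box::new(WarmCache)];
    let mut app = Vec::new();
    run_startup_hooks(&hooks, &mut app).unwrap();
    // Hooks ran in registration order.
    assert_eq!(app, vec!["migrations applied".to_string(), "cache warmed".to_string()]);
}
```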
AppState
The `AppState` struct holds application-wide state:
```rust
pub struct AppState {
    values: RwLock<HashMap<TypeId, Box<dyn Any + Send + Sync>>>,
}

impl AppState {
    pub fn new() -> Self {
        Self {
            values: RwLock::new(HashMap::new()),
        }
    }

    pub fn insert<T: 'static + Send + Sync>(&self, value: T) -> Result<(), AppError> {
        let mut values = self.values.write().map_err(|_| {
            AppError::internal_server_error("Failed to acquire write lock on app state")
        })?;
        let type_id = TypeId::of::<T>();
        values.insert(type_id, Box::new(value));
        Ok(())
    }

    pub fn get<T: 'static + Clone + Send + Sync>(&self) -> Result<T, AppError> {
        let values = self.values.read().map_err(|_| {
            AppError::internal_server_error("Failed to acquire read lock on app state")
        })?;
        let type_id = TypeId::of::<T>();
        match values.get(&type_id) {
            Some(value) => {
                if let Some(value_ref) = value.downcast_ref::<T>() {
                    Ok(value_ref.clone())
                } else {
                    Err(AppError::internal_server_error(
                        format!("Value of type {:?} exists but could not be downcast", type_id)
                    ))
                }
            },
            None => Err(AppError::not_found(
                format!("No value of type {:?} found in app state", type_id)
            )),
        }
    }
}
```
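The idea underlying `AppState` is type-indexed storage: one value per Rust type, keyed by `TypeId` and recovered by downcasting. A standalone sketch of that pattern, independent of the Navius types (`TypedState` and `FeatureFlags` are illustrative names, and `Option` stands in for `Result<_, AppError>`):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::sync::RwLock;

// Minimal type-indexed store: values are keyed by their TypeId
// and retrieved by downcasting back to the concrete type.
struct TypedState {
    values: RwLock<HashMap<TypeId, Box<dyn Any + Send + Sync>>>,
}

impl TypedState {
    fn new() -> Self {
        Self { values: RwLock::new(HashMap::new()) }
    }

    fn insert<T: 'static + Send + Sync>(&self, value: T) {
        self.values.write().unwrap().insert(TypeId::of::<T>(), Box::new(value));
    }

    fn get<T: 'static + Clone + Send + Sync>(&self) -> Option<T> {
        self.values.read().unwrap()
            .get(&TypeId::of::<T>())
            .and_then(|v| v.downcast_ref::<T>())
            .cloned()
    }
}

fn main() {
    #[derive(Clone, PartialEq, Debug)]
    struct FeatureFlags { beta_enabled: bool }

    let state = TypedState::new();
    state.insert(FeatureFlags { beta_enabled: true });

    // Lookups are keyed purely by type: a hit for the stored type, a miss otherwise.
    assert_eq!(state.get::<FeatureFlags>(), Some(FeatureFlags { beta_enabled: true }));
    assert_eq!(state.get::<u64>(), None);
}
```

Because the map holds exactly one entry per `TypeId`, inserting a second value of the same type replaces the first; wrap values in your own newtype if you need several of the same underlying type.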
Application Lifecycle
Building an Application
```rust
// Create an application builder. Only the fallible builder methods
// (those returning Result, such as with_service) take `?`.
let builder = ApplicationBuilder::new("my-app")
    .with_config(config_service.clone())
    .with_service(database_service.clone())?
    .with_service(cache_service.clone())?
    .with_startup_hook(Box::new(DatabaseMigrationHook::new()))
    .with_shutdown_hook(Box::new(ResourceCleanupHook::new()));

// Build the application
let app = builder.build()?;
```
Starting an Application
```rust
// Create and build an application
let app = ApplicationBuilder::new("my-app")
    // Configure the application
    .build()?;

// Run the application
app.run().await?;
```
Graceful Shutdown
```rust
// Create a shutdown signal handler
let shutdown_signal = async {
    match signal::ctrl_c().await {
        Ok(()) => {
            log::info!("Shutdown signal received, starting graceful shutdown");
        },
        Err(err) => {
            log::error!("Error listening for shutdown signal: {}", err);
        }
    }
};

// Run the application with shutdown signal
app.run_until(shutdown_signal).await?;
```
Routing
Defining Routes
```rust
// Create a router
let router = Router::new()
    .route("/", get(index_handler))
    .route("/users", get(get_users).post(create_user))
    .route("/users/:id", get(get_user_by_id))
    .nest("/api/v1", api_v1_router())
    .layer(middleware::from_fn(auth_middleware));

// Create an application with the router
let app = ApplicationBuilder::new("my-app")
    .with_router(router)
    .build()?;
```
Route Groups
```rust
// Create a grouped router
let api_router = Router::new()
    .route("/users", get(get_users).post(create_user))
    .route("/users/:id", get(get_user_by_id).put(update_user).delete(delete_user))
    .route("/health", get(health_check));

// Add authentication middleware to the group
let authenticated_api = api_router.layer(middleware::from_fn(auth_middleware));

// Add the grouped router to the main router
let router = Router::new()
    .nest("/api/v1", authenticated_api)
    .route("/", get(index_handler));
```
Route Parameters
```rust
// Handler function with route parameter
async fn get_user_by_id(
    Path(id): Path<String>,
    State(registry): State<Arc<ServiceRegistry>>,
) -> Result<Json<User>, AppError> {
    let user_service = registry.get::<UserService>()?;
    let user = user_service.get_user(&id).await?;
    Ok(Json(user))
}
```
Query Parameters
```rust
#[derive(Debug, Deserialize)]
struct UserQuery {
    limit: Option<usize>,
    offset: Option<usize>,
    sort_by: Option<String>,
}

async fn get_users(
    Query(query): Query<UserQuery>,
    State(registry): State<Arc<ServiceRegistry>>,
) -> Result<Json<Vec<User>>, AppError> {
    let user_service = registry.get::<UserService>()?;
    let limit = query.limit.unwrap_or(10);
    let offset = query.offset.unwrap_or(0);
    let sort_by = query.sort_by.unwrap_or_else(|| "created_at".to_string());
    let users = user_service.list_users(limit, offset, &sort_by).await?;
    Ok(Json(users))
}
```
Request Handling
Request Extractors
Navius provides several extractors for handling requests:
| Extractor | Description | Example |
|-----------|-------------|---------|
| `State<T>` | Extract shared state | `State(registry): State<Arc<ServiceRegistry>>` |
| `Path<T>` | Extract path parameters | `Path(id): Path<String>` |
| `Query<T>` | Extract query parameters | `Query(params): Query<Pagination>` |
| `Json<T>` | Extract JSON request body | `Json(user): Json<CreateUser>` |
| `Form<T>` | Extract form data | `Form(login): Form<LoginForm>` |
| `Extension<T>` | Extract request extensions | `Extension(user): Extension<CurrentUser>` |
Handler Function
```rust
async fn create_user(
    Json(user_data): Json<CreateUserRequest>,
    State(registry): State<Arc<ServiceRegistry>>,
) -> Result<(StatusCode, Json<User>), AppError> {
    let user_service = registry.get::<UserService>()?;

    // Validate input
    user_data.validate()?;

    // Create user
    let user = user_service.create_user(user_data.into()).await?;

    // Return created user with 201 status
    Ok((StatusCode::CREATED, Json(user)))
}
```
Service Access
```rust
async fn process_order(
    Json(order_data): Json<CreateOrderRequest>,
    State(registry): State<Arc<ServiceRegistry>>,
) -> Result<Json<Order>, AppError> {
    // Get services from registry
    let order_service = registry.get::<OrderService>()?;
    let payment_service = registry.get::<PaymentService>()?;
    let notification_service = registry.get::<NotificationService>()?;

    // Process the order
    let order = order_service.create_order(order_data).await?;

    // Process payment
    let payment = payment_service.process_payment(&order).await?;

    // Update order with payment information
    let updated_order = order_service.update_order_payment(&order.id, &payment.id).await?;

    // Send notification
    notification_service.send_order_confirmation(&updated_order).await?;

    Ok(Json(updated_order))
}
```
Middleware
Creating Middleware
```rust
async fn logging_middleware(
    req: Request,
    next: Next,
) -> Result<Response, AppError> {
    let start = std::time::Instant::now();
    let method = req.method().clone();
    let uri = req.uri().clone();

    let response = next.run(req).await?;

    let duration = start.elapsed();
    log::info!("{} {} - {} - {:?}", method, uri, response.status(), duration);

    Ok(response)
}
```
Authentication Middleware
```rust
async fn auth_middleware(
    req: Request,
    next: Next,
) -> Result<Response, AppError> {
    // Extract authorization header
    let auth_header = req.headers()
        .get(header::AUTHORIZATION)
        .and_then(|h| h.to_str().ok())
        .ok_or_else(|| AppError::unauthorized("Missing authorization header"))?;

    // Validate token
    if !auth_header.starts_with("Bearer ") {
        return Err(AppError::unauthorized("Invalid authorization format"));
    }
    let token = &auth_header["Bearer ".len()..];

    // Get auth service from request extensions
    let registry = req.extensions()
        .get::<Arc<ServiceRegistry>>()
        .ok_or_else(|| AppError::internal_server_error("Service registry not found"))?;
    let auth_service = registry.get::<AuthService>()?;

    // Verify token
    let user = auth_service.verify_token(token).await?;

    // Add user to request extensions
    let mut req = req;
    req.extensions_mut().insert(user);

    // Continue with the request
    next.run(req).await
}
```
CORS Middleware
```rust
fn configure_cors() -> CorsLayer {
    CorsLayer::new()
        .allow_origin(["https://example.com".parse::<HeaderValue>().unwrap()])
        .allow_methods(vec![Method::GET, Method::POST, Method::PUT, Method::DELETE])
        .allow_headers(vec![header::CONTENT_TYPE, header::AUTHORIZATION])
        .allow_credentials(true)
        .max_age(Duration::from_secs(3600))
}

// Add to router
let router = Router::new()
    .route("/users", get(get_users))
    .layer(configure_cors());
```
Rate Limiting Middleware
```rust
fn configure_rate_limit() -> RateLimitLayer {
    RateLimitLayer::new(
        100,                     // requests
        Duration::from_secs(60), // per minute
    )
}

// Add to router
let router = Router::new()
    .route("/api/v1/users", get(get_users))
    .layer(configure_rate_limit());
```
Error Handling
AppError
The `AppError` struct represents application errors:
```rust
pub struct AppError {
    pub status: StatusCode,
    pub code: String,
    pub message: String,
    pub details: Option<serde_json::Value>,
    pub source: Option<Box<dyn std::error::Error + Send + Sync>>,
}

impl AppError {
    pub fn new(status: StatusCode, code: impl Into<String>, message: impl Into<String>) -> Self {
        Self {
            status,
            code: code.into(),
            message: message.into(),
            details: None,
            source: None,
        }
    }

    pub fn with_details(mut self, details: serde_json::Value) -> Self {
        self.details = Some(details);
        self
    }

    pub fn with_source<E: std::error::Error + Send + Sync + 'static>(mut self, source: E) -> Self {
        self.source = Some(Box::new(source));
        self
    }

    // Convenience methods for common errors
    pub fn bad_request(message: impl Into<String>) -> Self {
        Self::new(StatusCode::BAD_REQUEST, "BAD_REQUEST", message)
    }

    pub fn unauthorized(message: impl Into<String>) -> Self {
        Self::new(StatusCode::UNAUTHORIZED, "UNAUTHORIZED", message)
    }

    pub fn forbidden(message: impl Into<String>) -> Self {
        Self::new(StatusCode::FORBIDDEN, "FORBIDDEN", message)
    }

    pub fn not_found(message: impl Into<String>) -> Self {
        Self::new(StatusCode::NOT_FOUND, "NOT_FOUND", message)
    }

    pub fn internal_server_error(message: impl Into<String>) -> Self {
        Self::new(StatusCode::INTERNAL_SERVER_ERROR, "INTERNAL_SERVER_ERROR", message)
    }
}
```
Error Response Format
```json
{
  "status": 400,
  "code": "VALIDATION_ERROR",
  "message": "Invalid request data",
  "details": {
    "errors": [
      { "field": "email", "message": "Invalid email format" },
      { "field": "password", "message": "Password must be at least 8 characters" }
    ]
  }
}
```
Error Handler Middleware
```rust
async fn error_handler(err: AppError) -> Response {
    let status = err.status;
    let body = serde_json::json!({
        "status": status.as_u16(),
        "code": err.code,
        "message": err.message,
        "details": err.details,
    });

    // Log the error
    if status.is_server_error() {
        log::error!("Server error: {:?}", err);
        if let Some(source) = &err.source {
            log::error!("Caused by: {:?}", source);
        }
    } else {
        log::debug!("Client error: {:?}", err);
    }

    // Create response
    let mut response = Response::new(Body::from(serde_json::to_vec(&body).unwrap_or_default()));
    *response.status_mut() = status;
    response.headers_mut().insert(
        header::CONTENT_TYPE,
        HeaderValue::from_static("application/json"),
    );
    response
}
```
Validation Error Handling
```rust
#[derive(Debug, Deserialize, Validate)]
struct CreateUserRequest {
    #[validate(email)]
    email: String,
    #[validate(length(min = 8))]
    password: String,
    #[validate(length(min = 1, max = 100))]
    name: String,
}

impl CreateUserRequest {
    fn validate(&self) -> Result<(), AppError> {
        match <Self as Validate>::validate(self) {
            Ok(_) => Ok(()),
            Err(validation_errors) => {
                let mut errors = Vec::new();
                for (field, field_errors) in validation_errors.field_errors() {
                    for error in field_errors {
                        errors.push(serde_json::json!({
                            "field": field,
                            "message": error.message.clone().unwrap_or_else(|| "Invalid value".into()),
                        }));
                    }
                }
                Err(AppError::bad_request("Validation failed")
                    .with_details(serde_json::json!({ "errors": errors })))
            }
        }
    }
}
```
Configuration
Application Configuration
```rust
#[derive(Debug, Deserialize, Clone)]
pub struct AppConfig {
    pub name: String,
    pub environment: String,
    pub server: ServerConfig,
    pub logging: LoggingConfig,
    pub database: DatabaseConfig,
}

#[derive(Debug, Deserialize, Clone)]
pub struct ServerConfig {
    pub host: String,
    pub port: u16,
    pub request_timeout: Option<u64>,
}

// Load configuration
let config = ConfigManager::new(vec![
    Box::new(FileConfigLoader::new("config")?),
    Box::new(EnvConfigLoader::new("APP_")?),
])
.load()?;

// Create application with configuration
let app = ApplicationBuilder::new("my-app")
    .with_config(Arc::new(config))
    .build()?;
```
Loading Configuration from Files
```yaml
# config/default.yaml
name: "my-app"
environment: "development"
server:
  host: "127.0.0.1"
  port: 8080
  request_timeout: 30
logging:
  level: "debug"
  format: "json"
database:
  url: "postgres://localhost:5432/myapp"
  max_connections: 10
```
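The loader list passed to `ConfigManager::new` is ordered: the file loader runs first and the environment loader after it, so environment values override file values. A standalone sketch of that merge semantics (the `merge_layers` helper and flat string keys are our illustration, not the actual Navius merge code):

```rust
use std::collections::HashMap;

// Layered configuration merge: later layers win on key conflicts,
// mirroring "file first, environment second" override order.
fn merge_layers(layers: Vec<HashMap<String, String>>) -> HashMap<String, String> {
    let mut merged = HashMap::new();
    for layer in layers {
        merged.extend(layer); // later layers overwrite earlier keys
    }
    merged
}

fn main() {
    let file = HashMap::from([
        ("server.port".to_string(), "8080".to_string()),
        ("logging.level".to_string(), "debug".to_string()),
    ]);
    let env = HashMap::from([("server.port".to_string(), "9090".to_string())]);

    let config = merge_layers(vec![file, env]);
    assert_eq!(config["server.port"], "9090");   // env override wins
    assert_eq!(config["logging.level"], "debug"); // file value preserved
}
```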
State Management
Application State
```rust
// Define application state
pub struct AppMetrics {
    pub request_count: AtomicU64,
    pub error_count: AtomicU64,
}

impl AppMetrics {
    pub fn new() -> Self {
        Self {
            request_count: AtomicU64::new(0),
            error_count: AtomicU64::new(0),
        }
    }

    pub fn increment_request_count(&self) {
        self.request_count.fetch_add(1, Ordering::SeqCst);
    }

    pub fn increment_error_count(&self) {
        self.error_count.fetch_add(1, Ordering::SeqCst);
    }

    pub fn get_request_count(&self) -> u64 {
        self.request_count.load(Ordering::SeqCst)
    }

    pub fn get_error_count(&self) -> u64 {
        self.error_count.load(Ordering::SeqCst)
    }
}

// Add state to application
let metrics = AppMetrics::new();
let app = ApplicationBuilder::new("my-app")
    .with_state(metrics)?
    .build()?;
```
Accessing State in Handlers
```rust
async fn metrics_handler(
    State(app): State<Arc<Application>>,
) -> Result<Json<serde_json::Value>, AppError> {
    let metrics = app.state().get::<AppMetrics>()?;
    let response = serde_json::json!({
        "request_count": metrics.get_request_count(),
        "error_count": metrics.get_error_count(),
    });
    Ok(Json(response))
}
```
Request Validation
Request Validation Middleware
```rust
async fn validate_request<T, B>(req: Request<B>, next: Next<B>) -> Result<Response, AppError>
where
    T: DeserializeOwned + Validate,
    B: Body + Send + 'static,
    B::Error: Into<Box<dyn std::error::Error + Send + Sync>>,
    B::Data: Send,
{
    // Extract and validate the body
    let (parts, body) = req.into_parts();
    let bytes = hyper::body::to_bytes(body).await.map_err(|err| {
        AppError::bad_request("Failed to read request body").with_source(err)
    })?;
    let value: T = serde_json::from_slice(&bytes).map_err(|err| {
        AppError::bad_request("Invalid JSON").with_source(err)
    })?;

    // Validate the request
    if let Err(validation_errors) = value.validate() {
        let mut errors = Vec::new();
        for (field, field_errors) in validation_errors.field_errors() {
            for error in field_errors {
                errors.push(serde_json::json!({
                    "field": field,
                    "message": error.message.clone().unwrap_or_else(|| "Invalid value".into()),
                }));
            }
        }
        return Err(AppError::bad_request("Validation failed")
            .with_details(serde_json::json!({ "errors": errors })));
    }

    // Reconstruct the request
    let body = Body::from(bytes);
    let req = Request::from_parts(parts, body);

    // Continue with the request
    next.run(req).await
}
```
Health Checks
Health Check Handler
```rust
async fn health_check(
    State(registry): State<Arc<ServiceRegistry>>,
) -> Result<Json<serde_json::Value>, AppError> {
    let health_service = registry.get::<HealthService>()?;

    // Perform health check
    let health_status = health_service.check().await?;

    // Return health status
    Ok(Json(health_status))
}
```
Health Service
```rust
pub struct HealthService {
    checkers: Vec<Box<dyn HealthChecker>>,
}

impl HealthService {
    pub fn new() -> Self {
        Self {
            checkers: Vec::new(),
        }
    }

    pub fn add_checker(&mut self, checker: Box<dyn HealthChecker>) {
        self.checkers.push(checker);
    }

    pub async fn check(&self) -> Result<serde_json::Value, AppError> {
        let mut status = "UP";
        let mut components = HashMap::new();

        for checker in &self.checkers {
            let result = checker.check().await;
            let component_status = match &result {
                Ok(_) => "UP",
                Err(_) => {
                    status = "DOWN";
                    "DOWN"
                }
            };
            let details = match result {
                Ok(details) => details,
                Err(err) => {
                    serde_json::json!({ "error": err.to_string() })
                }
            };
            components.insert(checker.name(), serde_json::json!({
                "status": component_status,
                "details": details
            }));
        }

        Ok(serde_json::json!({
            "status": status,
            "components": components,
            "timestamp": chrono::Utc::now().to_rfc3339()
        }))
    }
}
```
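The aggregation rule embedded in `check` is worth calling out: the overall status is `"UP"` only if every registered checker succeeds, and a single failure flips it to `"DOWN"` while the other component results are still reported. A standalone sketch of just that rule (the `aggregate` helper is ours; plain tuples stand in for the JSON payloads):

```rust
// Overall status is "UP" only if every component check succeeds;
// each component still reports its own status either way.
fn aggregate(results: &[(&str, Result<(), String>)]) -> (String, Vec<(String, String)>) {
    let mut status = "UP".to_string();
    let mut components = Vec::new();
    for (name, result) in results {
        let component_status = match result {
            Ok(()) => "UP",
            Err(_) => {
                status = "DOWN".to_string();
                "DOWN"
            }
        };
        components.push((name.to_string(), component_status.to_string()));
    }
    (status, components)
}

fn main() {
    let all_ok = aggregate(&[("db", Ok(())), ("cache", Ok(()))]);
    assert_eq!(all_ok.0, "UP");

    // One failing component takes the whole service to DOWN.
    let one_down = aggregate(&[("db", Ok(())), ("cache", Err("timeout".to_string()))]);
    assert_eq!(one_down.0, "DOWN");
    assert_eq!(one_down.1[0], ("db".to_string(), "UP".to_string()));
    assert_eq!(one_down.1[1], ("cache".to_string(), "DOWN".to_string()));
}
```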
Related Examples
- Basic Application Example
- Configuration Example
- Custom Service Example
- Error Handling Example
- Middleware Example
title: "Database API Reference"
description: "API documentation for Navius database service and operations"
category: api
tags:
- api
- database
- storage
- repository
related:
- ../patterns/database-service-pattern.md
- ../../02_examples/database-service-example.md
- ../patterns/repository-pattern.md
last_updated: March 31, 2025
version: 1.0
Database API Reference
Overview
The Database API provides a generic interface for interacting with databases through the Database Service. While this is primarily a programmatic API rather than a REST API, this reference documents the core interfaces, operations, and usage patterns for working with the Database Service.
Core Interfaces
DatabaseOperations
The `DatabaseOperations` trait defines the core operations available for all database implementations:
```rust
#[async_trait]
pub trait DatabaseOperations: Send + Sync {
    /// Get a value from the database
    async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError>;

    /// Set a value in the database
    async fn set(&self, collection: &str, key: &str, value: &str) -> Result<(), ServiceError>;

    /// Delete a value from the database
    async fn delete(&self, collection: &str, key: &str) -> Result<bool, ServiceError>;

    /// Query the database with a filter
    async fn query(&self, collection: &str, filter: &str) -> Result<Vec<String>, ServiceError>;

    /// Execute a database transaction with multiple operations
    async fn transaction<F, T>(&self, operations: F) -> Result<T, ServiceError>
    where
        F: FnOnce(&dyn DatabaseOperations) -> Result<T, ServiceError> + Send + 'static,
        T: Send + 'static;
}
```
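The data model behind this trait is a two-level map: a named collection, then string keys mapping to serialized (typically JSON) string values. A standalone, synchronous sketch of that model (`MemoryDb` is our illustration; the real trait is async and returns `ServiceError`):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Collection -> (key -> serialized value), guarded for shared access.
struct MemoryDb {
    data: RwLock<HashMap<String, HashMap<String, String>>>,
}

impl MemoryDb {
    fn new() -> Self {
        Self { data: RwLock::new(HashMap::new()) }
    }

    fn set(&self, collection: &str, key: &str, value: &str) {
        self.data.write().unwrap()
            .entry(collection.to_string())
            .or_default()
            .insert(key.to_string(), value.to_string());
    }

    fn get(&self, collection: &str, key: &str) -> Option<String> {
        self.data.read().unwrap()
            .get(collection)
            .and_then(|c| c.get(key))
            .cloned()
    }

    // Returns true if the key existed, matching the trait's Result<bool, _> shape.
    fn delete(&self, collection: &str, key: &str) -> bool {
        self.data.write().unwrap()
            .get_mut(collection)
            .map_or(false, |c| c.remove(key).is_some())
    }
}

fn main() {
    let db = MemoryDb::new();
    db.set("users", "user-123", r#"{"name":"Alice"}"#);
    assert_eq!(db.get("users", "user-123").as_deref(), Some(r#"{"name":"Alice"}"#));
    assert!(db.delete("users", "user-123"));
    assert_eq!(db.get("users", "user-123"), None);
}
```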
DatabaseProvider
The `DatabaseProvider` trait enables creating database instances:
```rust
#[async_trait]
pub trait DatabaseProvider: Send + Sync {
    /// The type of database this provider creates
    type Database: DatabaseOperations;

    /// Create a new database instance
    async fn create_database(&self, config: DatabaseConfig) -> Result<Self::Database, ServiceError>;

    /// Check if this provider supports the given configuration
    fn supports(&self, config: &DatabaseConfig) -> bool;

    /// Get the name of this provider
    fn name(&self) -> &str;
}
```
DatabaseService
The `DatabaseService` manages database instances:
```rust
pub struct DatabaseService {
    provider_registry: Arc<RwLock<DatabaseProviderRegistry>>,
    default_config: DatabaseConfig,
}

impl DatabaseService {
    pub fn new(registry: DatabaseProviderRegistry) -> Self {
        // Implementation details...
    }

    pub fn with_default_config(mut self, config: DatabaseConfig) -> Self {
        // Implementation details...
    }

    pub async fn create_database(&self) -> Result<Box<dyn DatabaseOperations>, ServiceError> {
        // Implementation details...
    }
}
```
Using the Database API
Accessing the Database Service
The database service is accessible through the application's service registry:
```rust
use crate::core::services::ServiceRegistry;
use crate::core::services::database_service::DatabaseService;

// Get the service from the service registry
let db_service = service_registry.get::<DatabaseService>()?;

// Create a database instance
let db = db_service.create_database().await?;
```
Basic CRUD Operations
Creating/Updating Records
```rust
// Create a new user record
let user = User {
    id: "user-123",
    name: "Alice",
    email: "[email protected]",
};

// Serialize to JSON
let user_json = serde_json::to_string(&user)?;

// Store in database
db.set("users", &user.id, &user_json).await?;
```
Reading Records
```rust
// Get a user by ID
if let Some(user_json) = db.get("users", "user-123").await? {
    // Deserialize from JSON
    let user: User = serde_json::from_str(&user_json)?;
    println!("Found user: {}", user.name);
}
```
Querying Records
```rust
// Query users with role=admin
let admin_users = db.query("users", "role='admin'").await?;

// Process results
for user_json in admin_users {
    let user: User = serde_json::from_str(&user_json)?;
    println!("Admin user: {}", user.name);
}
```
Deleting Records
```rust
// Delete a user
let deleted = db.delete("users", "user-123").await?;
if deleted {
    println!("User deleted successfully");
} else {
    println!("User not found");
}
```
Transactions
Transactions allow multiple operations to be executed atomically:
```rust
// Execute a transaction
db.transaction(|tx| {
    // Create a new user
    tx.set("users", "user-1", r#"{"name":"Alice","balance":0}"#)?;

    // Create an account for the user
    tx.set("accounts", "account-1", r#"{"owner":"user-1","balance":100}"#)?;

    // Create initial transaction record
    tx.set("transactions", "tx-1", r#"{"account":"account-1","amount":100,"type":"deposit"}"#)?;

    Ok(())
}).await?;
```
Using Repository Pattern
The Database API is typically used via the Repository pattern:
```rust
use crate::core::models::{Entity, Repository};
use crate::core::services::repository_service::GenericRepository;

// Create a repository for User entities
let user_repo = GenericRepository::<User>::with_service(&repository_service).await?;

// Create a new user
let mut user = User::new("Alice", "[email protected]");

// Save the user
let saved_user = user_repo.save(&user).await?;

// Find a user by ID
if let Some(found_user) = user_repo.find_by_id(&user.id).await? {
    println!("Found user: {}", found_user.name);
}

// Delete a user
let deleted = user_repo.delete(&user.id).await?;
```
Available Database Providers
InMemoryDatabaseProvider
The in-memory database provider is useful for development and testing:
```rust
use crate::core::services::memory_database::InMemoryDatabaseProvider;

// Create a provider
let provider = InMemoryDatabaseProvider::new();

// Create a database instance
let config = DatabaseConfig::default();
let db = provider.create_database(config).await?;
```
PostgresDatabaseProvider
When PostgreSQL integration is enabled, the PostgreSQL provider is available:
```rust
use crate::core::services::postgres_database::PostgresDatabaseProvider;

// Create a provider with a connection string
let provider = PostgresDatabaseProvider::new("postgres://user:pass@localhost/dbname");

// Create a database instance
let config = DatabaseConfig::default();
let db = provider.create_database(config).await?;
```
Configuration
The Database Service can be configured in `config/default.yaml`:
```yaml
# Database configuration
database:
  # Default provider to use
  provider: memory

  # Provider-specific configurations
  providers:
    memory:
      enabled: true
    postgres:
      enabled: true
      connection_string: "postgres://user:pass@localhost/dbname"
      max_connections: 10
      connection_timeout_ms: 5000
      idle_timeout_ms: 300000

  # Common settings
  common:
    query_timeout_ms: 3000
    log_queries: true
```
Error Handling
The Database API uses `ServiceError` for error handling:
```rust
// Example error handling
match db.get("users", "user-123").await {
    Ok(Some(user_json)) => {
        // Process user
    },
    Ok(None) => {
        // User not found
        println!("User not found");
    },
    Err(e) => {
        match e {
            ServiceError::DatabaseError { message, .. } => {
                // Handle database error
                println!("Database error: {}", message);
            },
            ServiceError::NotFound { message } => {
                // Handle not found error
                println!("Not found: {}", message);
            },
            _ => {
                // Handle other errors
                println!("Error: {}", e);
            }
        }
    }
}
```
Implementing a Custom Provider
You can implement your own database provider by implementing the `DatabaseProvider` trait:
```rust
use crate::core::services::database_service::{
    DatabaseOperations, DatabaseProvider, DatabaseConfig
};
use crate::core::services::error::ServiceError;
use async_trait::async_trait;

// Custom database implementation
pub struct CustomDatabase {
    // Implementation details...
}

#[async_trait]
impl DatabaseOperations for CustomDatabase {
    async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError> {
        // Implementation...
    }

    // Other methods...
}

// Custom provider
pub struct CustomDatabaseProvider {
    // Provider details...
}

#[async_trait]
impl DatabaseProvider for CustomDatabaseProvider {
    type Database = CustomDatabase;

    async fn create_database(&self, config: DatabaseConfig) -> Result<Self::Database, ServiceError> {
        // Implementation...
    }

    fn supports(&self, config: &DatabaseConfig) -> bool {
        config.provider_type == "custom"
    }

    fn name(&self) -> &str {
        "custom"
    }
}
```
Register your custom provider:
```rust
let mut registry = DatabaseProviderRegistry::new();
registry.register("custom", CustomDatabaseProvider::new());
let service = DatabaseService::new(registry);
```
Best Practices
Collection Naming
- Use lowercase, plural nouns for collection names (e.g., `users`, `accounts`)
- Use dashes instead of spaces or underscores (e.g., `order-items`)
- Keep collection names consistent across the application
Key Generation
- Use UUIDs or other globally unique identifiers for keys
- Consider using prefixed keys for better organization (e.g., `user-123`)
- Be consistent with key formats within each collection
JSON Serialization
- Use serde for JSON serialization/deserialization
- Define clear schema for each collection's documents
- Include version information in documents for schema evolution
- Consider using compression for large documents
Query Patterns
- Keep queries simple and specific to indexes
- Use appropriate filters to minimize data transfer
- Consider pagination for large result sets
- Use transactions for operations that must be atomic
Error Handling
- Handle both expected errors (e.g., not found) and unexpected errors
- Provide appropriate context in error messages
- Consider retrying transient errors (e.g., connection issues)
- Don't expose internal database errors to users
Performance Considerations
- Use connection pooling for database connections
- Cache frequently accessed data
- Use batch operations for multiple records
- Consider data access patterns when designing schemas
- Use appropriate indexes for frequent queries
- Monitor query performance and optimize as needed
Related Documentation
title: "Cache API Reference"
description: "API documentation for Navius caching service and operations"
category: api
tags:
- api
- cache
- performance
- memory
- redis
related:
- ../patterns/cache-provider-pattern.md
- ../../02_examples/cache-provider-example.md
- ../../02_examples/two-tier-cache-example.md
last_updated: March 31, 2025
version: 1.0
Cache API Reference
Overview
The Cache API provides a generic interface for interacting with caching systems through the Cache Service. This reference documents the core interfaces, operations, and usage patterns for working with the Cache Service, including single-tier and two-tier caching strategies.
Core Interfaces
CacheOperations
The `CacheOperations` trait defines the core operations available for all cache implementations:
```rust
#[async_trait]
pub trait CacheOperations<T: Send + Sync + Clone + 'static>: Send + Sync {
    /// Get a value from the cache
    async fn get(&self, key: &str) -> Option<T>;

    /// Set a value in the cache with optional TTL
    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError>;

    /// Delete a value from the cache
    async fn delete(&self, key: &str) -> Result<bool, CacheError>;

    /// Clear the entire cache
    async fn clear(&self) -> Result<(), CacheError>;

    /// Get cache statistics
    fn stats(&self) -> CacheStats;
}
```
CacheProvider
The `CacheProvider` trait enables creating cache instances:
```rust
#[async_trait]
pub trait CacheProvider: Send + Sync {
    /// Create a new cache instance
    async fn create_cache<T: Send + Sync + Clone + 'static>(
        &self,
        config: CacheConfig
    ) -> Result<Box<dyn CacheOperations<T>>, CacheError>;

    /// Check if this provider supports the given configuration
    fn supports(&self, config: &CacheConfig) -> bool;

    /// Get the name of this provider
    fn name(&self) -> &str;
}
```
CacheService
The `CacheService` manages cache instances:
```rust
pub struct CacheService {
    provider_registry: Arc<RwLock<CacheProviderRegistry>>,
    config_by_resource: HashMap<String, CacheConfig>,
    default_config: CacheConfig,
}

impl CacheService {
    pub fn new(registry: CacheProviderRegistry) -> Self {
        // Implementation details...
    }

    pub async fn create_cache<T: Send + Sync + Clone + 'static>(
        &self,
        resource_name: &str
    ) -> Result<Box<dyn CacheOperations<T>>, CacheError> {
        // Implementation details...
    }

    pub async fn create_two_tier_cache<T: Send + Sync + Clone + 'static>(
        &self,
        resource_name: &str,
        config: TwoTierCacheConfig
    ) -> Result<TwoTierCache<T>, CacheError> {
        // Implementation details...
    }
}
```
TwoTierCache
The `TwoTierCache` provides multi-level caching capabilities:
```rust
pub struct TwoTierCache<T> {
    fast_cache: Box<dyn CacheOperations<T>>,
    slow_cache: Box<dyn CacheOperations<T>>,
    promote_on_get: bool,
    fast_ttl: Option<Duration>,
}

#[async_trait]
impl<T: Send + Sync + Clone + 'static> CacheOperations<T> for TwoTierCache<T> {
    // Implementation of CacheOperations...
}
```
Using the Cache API
Accessing the Cache Service
The cache service is accessible through the application's service registry:
```rust
use crate::core::services::ServiceRegistry;
use crate::core::services::cache_service::CacheService;

// Get the service from the service registry
let cache_service = service_registry.get::<CacheService>()?;

// Create a typed cache for users
let user_cache = cache_service.create_cache::<UserDto>("users").await?;
```
Basic Cache Operations
Setting Values
```rust
use std::time::Duration;

// Create a user
let user = UserDto {
    id: "user-123".to_string(),
    name: "Alice".to_string(),
    email: "[email protected]".to_string(),
};

// Cache with a 5 minute TTL
user_cache.set(&user.id, user.clone(), Some(Duration::from_secs(300))).await?;

// Cache with the default TTL
user_cache.set(&user.id, user.clone(), None).await?;
```
Getting Values
```rust
// Get a user from cache
if let Some(user) = user_cache.get("user-123").await {
    println!("Found user: {}", user.name);
} else {
    println!("User not in cache");

    // Fetch from database and cache
    let user = db.get_user("user-123").await?;
    user_cache.set("user-123", user.clone(), None).await?;
}
```
Deleting Values
```rust
// Delete a cached user
let deleted = user_cache.delete("user-123").await?;
if deleted {
    println!("User removed from cache");
} else {
    println!("User was not in cache");
}
```
Clearing Cache
```rust
// Clear the entire cache
user_cache.clear().await?;
```
Getting Cache Statistics
```rust
// Get cache statistics
let stats = user_cache.stats();
println!("Cache stats - Hits: {}, Misses: {}, Size: {}",
    stats.hits, stats.misses, stats.size);
```
Using Two-Tier Cache
```rust
use crate::core::services::cache_service::TwoTierCacheConfig;

// Configure two-tier cache
let config = TwoTierCacheConfig::new()
    .with_fast_provider("memory")
    .with_slow_provider("redis")
    .with_fast_ttl(Duration::from_secs(60))
    .with_slow_ttl(Duration::from_secs(3600))
    .with_promotion_enabled(true);

// Create a two-tier cache for products
let product_cache = cache_service
    .create_two_tier_cache::<ProductDto>("products", config)
    .await?;

// Use it like a regular cache
product_cache.set("product-123", product, None).await?;

// This will check fast cache first, then slow cache, and promote if found
if let Some(product) = product_cache.get("product-123").await {
    println!("Found product: {}", product.name);
}
```
Available Cache Providers
MemoryCacheProvider
The in-memory cache provider uses Moka for high-performance caching:
```rust
use crate::core::services::memory_cache::MemoryCacheProvider;

// Create a provider
let provider = MemoryCacheProvider::new();

// Create a cache instance
let config = CacheConfig::default()
    .with_name("user-cache")
    .with_ttl(Duration::from_secs(300))
    .with_max_size(10000);

let cache = provider.create_cache::<UserDto>(config).await?;
```
RedisCacheProvider
The Redis provider is used for distributed caching:
```rust
use crate::core::services::redis_cache::RedisCacheProvider;

// Create a provider with connection string
let provider = RedisCacheProvider::new("redis://localhost:6379");

// Create a cache instance
let config = CacheConfig::default()
    .with_name("product-cache")
    .with_ttl(Duration::from_secs(3600));

let cache = provider.create_cache::<ProductDto>(config).await?;
```
Configuration
The Cache Service can be configured in `config/default.yaml`:
```yaml
# Cache configuration
cache:
  # Default provider to use
  default_provider: memory

  # Provider-specific configurations
  providers:
    memory:
      enabled: true
      max_size: 10000
      ttl_seconds: 300
    redis:
      enabled: true
      connection_string: redis://localhost:6379
      ttl_seconds: 3600

  # Resource-specific cache configurations
  resources:
    users:
      provider: memory
      ttl_seconds: 60
      max_size: 1000
    products:
      provider: redis
      ttl_seconds: 1800

  # Two-tier cache configurations
  two_tier:
    users:
      fast_provider: memory
      slow_provider: redis
      fast_ttl_seconds: 60
      slow_ttl_seconds: 3600
      promote_on_get: true
```
Error Handling
The Cache API uses `CacheError` for error handling:

```rust
// Example error handling
match user_cache.set("user-123", user, None).await {
    Ok(_) => {
        println!("User cached successfully");
    },
    Err(e) => {
        match e {
            CacheError::ConnectionError { message, .. } => {
                // Handle connection error
                println!("Cache connection error: {}", message);
            },
            CacheError::SerializationError { message } => {
                // Handle serialization error
                println!("Cache serialization error: {}", message);
            },
            _ => {
                // Handle other errors
                println!("Cache error: {}", e);
            }
        }
    }
}
```
Cache-Aside Pattern
The cache-aside pattern is a common caching strategy:
```rust
async fn get_user_with_cache(
    id: &str,
    cache: &Box<dyn CacheOperations<UserDto>>,
    db: &Database
) -> Result<UserDto, ServiceError> {
    // Try to get from cache first
    if let Some(user) = cache.get(id).await {
        return Ok(user);
    }

    // If not in cache, get from database
    let user_json = db.get("users", id).await?
        .ok_or_else(|| ServiceError::not_found(format!("User not found: {}", id)))?;
    let user: UserDto = serde_json::from_str(&user_json)?;

    // Store in cache for next time with 5 minute TTL
    let _ = cache.set(id, user.clone(), Some(Duration::from_secs(300))).await;

    Ok(user)
}
```
Implementing a Custom Provider
You can implement your own cache provider by implementing the `CacheProvider` trait:

```rust
use crate::core::services::cache_provider::{
    CacheOperations, CacheProvider, CacheError, CacheStats
};
use crate::core::services::cache_service::CacheConfig;
use async_trait::async_trait;
use std::time::Duration;
use std::marker::PhantomData;

// Custom cache implementation
pub struct CustomCache<T> {
    // Implementation details...
    config: CacheConfig,
    _phantom: PhantomData<T>,
}

impl<T> CustomCache<T> {
    fn new(config: CacheConfig) -> Self {
        Self {
            config,
            _phantom: PhantomData,
        }
    }
}

#[async_trait]
impl<T: Send + Sync + Clone + 'static> CacheOperations<T> for CustomCache<T> {
    async fn get(&self, key: &str) -> Option<T> {
        // Implementation...
        None
    }

    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError> {
        // Implementation...
        Ok(())
    }

    async fn delete(&self, key: &str) -> Result<bool, CacheError> {
        // Implementation...
        Ok(false)
    }

    async fn clear(&self) -> Result<(), CacheError> {
        // Implementation...
        Ok(())
    }

    fn stats(&self) -> CacheStats {
        // Implementation...
        CacheStats {
            hits: 0,
            misses: 0,
            size: 0,
            max_size: self.config.max_size,
        }
    }
}

// Custom provider
pub struct CustomCacheProvider;

#[async_trait]
impl CacheProvider for CustomCacheProvider {
    async fn create_cache<T: Send + Sync + Clone + 'static>(
        &self,
        config: CacheConfig
    ) -> Result<Box<dyn CacheOperations<T>>, CacheError> {
        Ok(Box::new(CustomCache::<T>::new(config)))
    }

    fn supports(&self, config: &CacheConfig) -> bool {
        config.provider_type == "custom"
    }

    fn name(&self) -> &str {
        "custom"
    }
}
```
Register your custom provider:
```rust
let mut registry = CacheProviderRegistry::new();
registry.register(Box::new(CustomCacheProvider));
let service = CacheService::new(registry);
```
Best Practices
Cache Key Management
- Use consistent key generation strategies
- Include resource type in keys to prevent collisions
- Consider using prefixed keys (e.g., `user:123`)
- Keep keys short but descriptive
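As a concrete illustration of these conventions, here is a minimal, self-contained key-builder sketch. The helper names (`cache_key`, `versioned_cache_key`) are hypothetical and not part of the Navius API:

```rust
/// Build a namespaced cache key such as "user:123". Prefixing with the
/// resource type prevents collisions between caches for different resources.
fn cache_key(resource: &str, id: &str) -> String {
    format!("{}:{}", resource, id)
}

/// Versioned variant (e.g. "user:v2:123"), useful when invalidating a whole
/// generation of keys at once by bumping the version number.
fn versioned_cache_key(resource: &str, version: u32, id: &str) -> String {
    format!("{}:v{}:{}", resource, version, id)
}

fn main() {
    assert_eq!(cache_key("user", "123"), "user:123");
    assert_eq!(versioned_cache_key("user", 2, "123"), "user:v2:123");
    println!("{}", versioned_cache_key("user", 2, "123"));
}
```

Centralizing key construction in helpers like these keeps the prefix scheme consistent across the codebase.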
Time-to-Live (TTL) Strategies
- Use shorter TTLs for frequently changing data
- Use longer TTLs for relatively static data
- Set appropriate TTLs for different resources
- Consider different TTLs for different cache tiers
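One way to encode such a policy is a small lookup that maps resource volatility to a TTL tier. This is an illustrative sketch only; the resource names and durations are assumptions, not Navius defaults:

```rust
use std::time::Duration;

/// Illustrative TTL policy: volatile data gets short TTLs,
/// static reference data gets long ones.
fn ttl_for(resource: &str) -> Duration {
    match resource {
        // Frequently changing data: short TTL
        "sessions" | "presence" => Duration::from_secs(30),
        // Relatively static reference data: long TTL
        "countries" | "currencies" => Duration::from_secs(86_400),
        // Sensible middle ground for everything else
        _ => Duration::from_secs(300),
    }
}

fn main() {
    assert_eq!(ttl_for("sessions"), Duration::from_secs(30));
    assert_eq!(ttl_for("countries"), Duration::from_secs(86_400));
    assert_eq!(ttl_for("users"), Duration::from_secs(300));
    println!("{:?}", ttl_for("sessions"));
}
```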
Cache Invalidation
- Invalidate cache entries when source data changes
- Use versioned keys for complex invalidation scenarios
- Consider bulk invalidation for related data
- Implement cache consistency strategies
Error Handling
- Fail gracefully when cache operations fail
- Don't let cache failures affect critical operations
- Log cache errors without disrupting user operations
- Add circuit breakers for unreliable cache systems
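The "fail gracefully" advice can be sketched as a generic read-through wrapper. This is a simplified stand-in that uses closures instead of the real `CacheOperations` trait and a database handle:

```rust
/// Read-through with graceful degradation: a cache error is logged and the
/// source of truth is consulted, so a cache outage never fails the request.
fn get_with_fallback<T>(
    cache_get: impl Fn() -> Result<Option<T>, String>, // stand-in for a cache read
    fetch: impl Fn() -> T,                             // stand-in for a DB lookup
) -> T {
    match cache_get() {
        Ok(Some(value)) => value, // cache hit
        Ok(None) => fetch(),      // cache miss: go to the source
        Err(e) => {
            // Log and degrade gracefully instead of surfacing the cache error.
            eprintln!("cache error, falling back to source: {}", e);
            fetch()
        }
    }
}

fn main() {
    let hit = get_with_fallback(|| Ok(Some(1)), || 2);
    let miss = get_with_fallback(|| Ok(None), || 2);
    let error = get_with_fallback(|| Err("redis down".to_string()), || 3);
    assert_eq!((hit, miss, error), (1, 2, 3));
    println!("{} {} {}", hit, miss, error);
}
```

In production code the `Err` arm is also where a circuit breaker would record the failure.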
Cache Size Management
- Set appropriate maximum sizes for memory caches
- Monitor cache usage and adjust limits as needed
- Use eviction policies that match your access patterns
- Consider separate caches for different resource types
Performance Considerations
- Use bulk operations when possible
- Select appropriate serialization formats
- Balance cache TTLs against freshness requirements
- Monitor cache hit/miss rates
- Use two-tier caching for frequently accessed data
- Consider data compression for large cached items
Related Documentation
title: "Configuration API Reference"
description: "Comprehensive reference for the Navius Configuration API"
category: reference
tags:
- api
- configuration
- settings
related:
- ../patterns/feature-selection-pattern.md
- ../../02_examples/configuration-example.md
- ./application-api.md
last_updated: March 31, 2025
version: 1.0
Configuration API Reference
Overview
The Configuration API in Navius provides a comprehensive set of interfaces and methods for managing application configuration, including loading configuration from multiple sources, accessing configuration values in a type-safe manner, and managing environment-specific settings.
Core Interfaces
ConfigLoader
The `ConfigLoader` trait defines the interface for loading configuration from various sources:

```rust
pub trait ConfigLoader: Send + Sync {
    /// Load configuration from a source
    fn load(&self) -> Result<ConfigValues, ConfigError>;

    /// Priority of this loader (higher numbers have higher precedence)
    fn priority(&self) -> i32;
}
```
ConfigService
The `ConfigService` provides methods for accessing configuration values:

```rust
pub trait ConfigService: Send + Sync {
    /// Get a string value
    fn get_string(&self, key: &str) -> Result<String, ConfigError>;

    /// Get an integer value
    fn get_int(&self, key: &str) -> Result<i64, ConfigError>;

    /// Get a float value
    fn get_float(&self, key: &str) -> Result<f64, ConfigError>;

    /// Get a boolean value
    fn get_bool(&self, key: &str) -> Result<bool, ConfigError>;

    /// Get a complex value as a specific type
    fn get<T: DeserializeOwned>(&self, key: &str) -> Result<T, ConfigError>;

    /// Check if a configuration key exists
    fn has(&self, key: &str) -> bool;
}
```
ConfigValues
The `ConfigValues` struct represents a hierarchical configuration structure:

```rust
pub struct ConfigValues {
    values: HashMap<String, ConfigValue>,
}

pub enum ConfigValue {
    String(String),
    Integer(i64),
    Float(f64),
    Boolean(bool),
    Array(Vec<ConfigValue>),
    Object(HashMap<String, ConfigValue>),
    Null,
}
```
Implementation Details
Configuration Flow
The following diagram illustrates how configuration is loaded and accessed in a Navius application:
```mermaid
flowchart TD
    A[Configuration Sources] --> B[ConfigLoader]
    A --> C[Environment Variables]
    A --> D[Configuration Files]
    A --> E[Command Line Arguments]
    A --> F[Remote Sources]
    B --> G[ConfigManager]
    C --> G
    D --> G
    E --> G
    F --> G
    G --> H[ConfigValues]
    H --> I[ConfigService]
    I --> J[Application Components]
    I --> K[Type-Safe Configuration]
    K --> L[DatabaseConfig]
    K --> M[ServerConfig]
    K --> N[LoggingConfig]
```
Loading Process
When configuration is loaded, the following process occurs:
1. Each `ConfigLoader` loads configuration from its source
2. The `ConfigManager` merges all configurations based on priority
3. Higher priority values override lower priority values
4. The merged configuration is made available through the `ConfigService`
5. Application components access the configuration via the `ConfigService`
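The priority merge at the heart of this process can be sketched as follows. This is a hypothetical simplification: the real `ConfigManager` merges nested structures, not flat string maps:

```rust
use std::collections::HashMap;

/// Merge configuration layers so that higher-priority layers win.
fn merge_by_priority(
    mut layers: Vec<(i32, HashMap<String, String>)>,
) -> HashMap<String, String> {
    // Apply layers in ascending priority order; later inserts overwrite earlier ones.
    layers.sort_by_key(|(priority, _)| *priority);
    let mut merged = HashMap::new();
    for (_, layer) in layers {
        merged.extend(layer);
    }
    merged
}

fn main() {
    let file_layer = HashMap::from([
        ("server.port".to_string(), "3000".to_string()),
        ("server.host".to_string(), "localhost".to_string()),
    ]);
    let env_layer = HashMap::from([("server.port".to_string(), "8080".to_string())]);

    // The environment layer (priority 100) overrides the file layer (priority 10).
    let merged = merge_by_priority(vec![(10, file_layer), (100, env_layer)]);
    assert_eq!(merged["server.port"], "8080");
    assert_eq!(merged["server.host"], "localhost");
    println!("{}", merged["server.port"]);
}
```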
Common Operations
Loading Configuration
```rust
// Create configuration loaders
let file_loader = FileConfigLoader::new("config")?;
let env_loader = EnvConfigLoader::new("APP_")?;

// Create configuration manager with loaders
let config_manager = ConfigManager::new(vec![
    Box::new(file_loader),
    Box::new(env_loader),
]);

// Load configuration
let config = config_manager.load()?;
```
Accessing Configuration Values
```rust
// Get simple values
let server_host = config.get_string("server.host")?;
let server_port = config.get_int("server.port")?;
let debug_mode = config.get_bool("app.debug")?;

// Get typed configuration sections
let database_config = config.get::<DatabaseConfig>("database")?;
let logging_config = config.get::<LoggingConfig>("logging")?;
```
Checking for Configuration Values
```rust
if config.has("feature.new_ui") {
    let enable_new_ui = config.get_bool("feature.new_ui")?;
    // Use the configuration value
}
```
Working with Environment-Specific Configuration
```rust
// Get current environment
let environment = config.get_string("app.environment")
    .unwrap_or_else(|_| "development".to_string());

// Get environment-specific configuration key
let config_key = format!("{}.{}", environment, "database.url");

if config.has(&config_key) {
    let db_url = config.get_string(&config_key)?;
    // Use environment-specific database URL
} else {
    // Fall back to default database URL
    let db_url = config.get_string("database.url")?;
}
```
Configuration Service API
The `ConfigService` trait provides methods for accessing configuration values in a type-safe manner.
ConfigService Methods
| Method | Description | Parameters | Return Type |
|---|---|---|---|
| `get_string` | Get a string value | `key: &str` | `Result<String, ConfigError>` |
| `get_int` | Get an integer value | `key: &str` | `Result<i64, ConfigError>` |
| `get_float` | Get a float value | `key: &str` | `Result<f64, ConfigError>` |
| `get_bool` | Get a boolean value | `key: &str` | `Result<bool, ConfigError>` |
| `get<T>` | Get a complex value as a specific type | `key: &str` | `Result<T, ConfigError>` |
| `has` | Check if a configuration key exists | `key: &str` | `bool` |
Error Handling
The configuration API returns `ConfigError` for any operation that fails:

```rust
pub enum ConfigError {
    /// The specified key does not exist
    KeyNotFound(String),

    /// The value exists but has the wrong type
    TypeError {
        key: String,
        expected: String,
        actual: String,
    },

    /// Failed to load configuration from a source
    LoadError {
        source: String,
        reason: String,
    },

    /// Failed to parse configuration
    ParseError {
        source: String,
        reason: String,
    },
}
```
Example Usage
```rust
fn configure_database(config: &dyn ConfigService) -> Result<DatabasePool, Error> {
    // Get database configuration
    let db_type = config.get_string("database.type")?;
    let db_url = config.get_string("database.url")?;
    let max_connections = config.get_int("database.max_connections").unwrap_or(10) as u32;

    // Create database pool based on type
    match db_type.as_str() {
        "postgres" => {
            DatabasePool::postgres(db_url, max_connections)
        },
        "mysql" => {
            DatabasePool::mysql(db_url, max_connections)
        },
        _ => {
            Err(Error::config_error(format!("Unsupported database type: {}", db_type)))
        }
    }
}
```
Configuration Loaders
Navius provides several built-in configuration loaders:
FileConfigLoader
Loads configuration from YAML, JSON, or TOML files:
```rust
// Create a file loader that looks for files in the specified directory
let file_loader = FileConfigLoader::new("config")?;

// Load configuration from default.yaml, environment-specific files, and local overrides
let config = file_loader.load()?;
```
EnvConfigLoader
Loads configuration from environment variables:
```rust
// Create an environment variable loader with a specific prefix
let env_loader = EnvConfigLoader::new("APP_")?;

// Load configuration from environment variables
let config = env_loader.load()?;

// Example: APP_SERVER_HOST=localhost becomes server.host=localhost
```
RemoteConfigLoader
Loads configuration from a remote service:
```rust
// Create a remote configuration loader
let remote_loader = RemoteConfigLoader::new(
    "https://config.example.com/api/config",
    None,
    Duration::from_secs(300), // Cache TTL
)?;

// Load configuration from the remote service
let config = remote_loader.load()?;
```
Creating Custom Loaders
You can create custom configuration loaders by implementing the `ConfigLoader` trait:

```rust
struct DatabaseConfigLoader {
    connection_string: String,
    priority: i32,
}

impl ConfigLoader for DatabaseConfigLoader {
    fn load(&self) -> Result<ConfigValues, ConfigError> {
        // Implementation to load configuration from a database
        // ...
    }

    fn priority(&self) -> i32 {
        self.priority
    }
}
```
Working with ConfigValues
Navigating Nested Configuration
```rust
// Create a path to a nested value
let path = vec!["database", "connections", "primary", "host"];

// Navigate to the nested value, starting from the root ConfigValue
let mut current = &root_value;
for (idx, segment) in path.iter().enumerate() {
    match current {
        ConfigValue::Object(map) => {
            if let Some(value) = map.get(*segment) {
                current = value;
            } else {
                return Err(ConfigError::KeyNotFound(path.join(".")));
            }
        },
        _ => {
            return Err(ConfigError::TypeError {
                key: path[..idx].join("."),
                expected: "object".to_string(),
                actual: current.type_name().to_string(),
            });
        }
    }
}

// Extract the final value
match current {
    ConfigValue::String(s) => Ok(s.clone()),
    _ => Err(ConfigError::TypeError {
        key: path.join("."),
        expected: "string".to_string(),
        actual: current.type_name().to_string(),
    }),
}
```
Converting Configuration to Structs
```rust
// Define a configuration struct
#[derive(Debug, Deserialize)]
pub struct ServerConfig {
    pub host: String,
    pub port: u16,
    pub tls: Option<TlsConfig>,
}

#[derive(Debug, Deserialize)]
pub struct TlsConfig {
    pub cert_file: String,
    pub key_file: String,
}

// Extract the configuration
let server_config: ServerConfig = config.get("server")?;
```
Dynamic Configuration Reloading
WatchableConfigService
The `WatchableConfigService` allows for dynamic reloading of configuration:

```rust
// Create a watchable configuration service
let config_service = WatchableConfigService::new(config_manager);

// Start watching for changes (e.g., file changes)
config_service.start_watching(Duration::from_secs(30))?;

// Register a callback for configuration changes
config_service.on_reload(|new_config| {
    log::info!("Configuration reloaded");
    // Perform actions with the new configuration
});
```
Using Configuration Subscribers
```rust
// Define a configuration subscriber
pub struct LoggingConfigSubscriber {
    logger: Arc<dyn Logger>,
}

impl ConfigSubscriber for LoggingConfigSubscriber {
    fn on_config_change(&self, config: &dyn ConfigService) {
        if let Ok(log_level) = config.get_string("logging.level") {
            self.logger.set_level(&log_level);
        }
        if let Ok(log_format) = config.get_string("logging.format") {
            self.logger.set_format(&log_format);
        }
    }
}

// Register the subscriber
config_service.add_subscriber(Box::new(LoggingConfigSubscriber {
    logger: logger.clone(),
}));
```
Best Practices
- Use Typed Configuration: Define structs for configuration sections to ensure type safety.
- Provide Sensible Defaults: Always handle the case where configuration values are missing.
- Environment Overrides: Allow environment variables to override file-based configuration.
- Validation: Validate configuration at startup to fail fast if required values are missing or invalid.
- Centralize Configuration Access: Use a single configuration service throughout the application.
- Document Configuration: Keep a reference of all available configuration options and their meanings.
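The first four practices can be sketched in one self-contained example: a typed section with a sensible default, a required key that fails fast, and a range check. `MapConfig` is a hypothetical stand-in for the real `ConfigService`:

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for ConfigService backed by a flat string map.
struct MapConfig(HashMap<String, String>);

impl MapConfig {
    fn get_string(&self, key: &str) -> Result<String, String> {
        self.0.get(key).cloned().ok_or_else(|| format!("missing key: {}", key))
    }
    fn get_int(&self, key: &str) -> Result<i64, String> {
        self.get_string(key)?.parse().map_err(|e| format!("{}: {}", key, e))
    }
}

#[derive(Debug, PartialEq)]
struct ServerConfig {
    host: String,
    port: u16,
}

/// Typed section loader: optional keys fall back to defaults,
/// required keys fail fast, and values are validated at startup.
fn load_server_config(cfg: &MapConfig) -> Result<ServerConfig, String> {
    let host = cfg
        .get_string("server.host")
        .unwrap_or_else(|_| "127.0.0.1".to_string()); // sensible default
    let port = cfg.get_int("server.port")?; // required: error if missing
    if !(1..=65535).contains(&port) {
        return Err(format!("server.port out of range: {}", port));
    }
    Ok(ServerConfig { host, port: port as u16 })
}

fn main() {
    let cfg = MapConfig(HashMap::from([(
        "server.port".to_string(),
        "8080".to_string(),
    )]));
    let server = load_server_config(&cfg).unwrap();
    assert_eq!(server.host, "127.0.0.1"); // defaulted
    assert_eq!(server.port, 8080);
    println!("{}:{}", server.host, server.port);
}
```

In a real application the same shape is achieved with `config.get::<ServerConfig>("server")?` plus serde defaults.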
Common Patterns
Feature Flags
```rust
// Check if a feature is enabled
if config.get_bool("features.new_ui").unwrap_or(false) {
    // Use new UI components
} else {
    // Use classic UI components
}
```
Environment-Specific Configuration
```rust
// Get current environment
let env = config.get_string("app.environment")
    .unwrap_or_else(|_| "development".to_string());

// Create environment-specific key
let db_url_key = format!("environments.{}.database.url", env);

// Get environment-specific value with fallback
let db_url = if config.has(&db_url_key) {
    config.get_string(&db_url_key)?
} else {
    config.get_string("database.url")?
};
```
Connection Strings
```rust
// Build a connection string from configuration components
let host = config.get_string("database.host")?;
let port = config.get_int("database.port")?;
let name = config.get_string("database.name")?;
let user = config.get_string("database.user")?;
let password = config.get_string("database.password")?;

let connection_string = format!(
    "postgres://{}:{}@{}:{}/{}",
    user, password, host, port, name
);
```
Secrets Management
```rust
// Create a configuration manager with a secrets loader
let config_manager = ConfigManager::new(vec![
    Box::new(FileConfigLoader::new("config")?),
    Box::new(EnvConfigLoader::new("APP_")?),
    Box::new(SecretsConfigLoader::new("/run/secrets")?),
]);

// Load configuration including secrets
let config = config_manager.load()?;

// Access a secret
let api_key = config.get_string("secrets.api_key")?;
```
Troubleshooting
Common Issues
| Issue | Possible Cause | Solution |
|---|---|---|
| Configuration not loading | Incorrect file path | Check that configuration files are in the expected location |
| Missing configuration value | Key not specified in config | Provide a default value or validate at startup |
| Wrong type for configuration value | Incorrect type specification | Use the appropriate getter method (`get_string`, `get_int`, etc.) |
| Environment-specific config not applied | Wrong environment name | Verify `app.environment` setting and environment-specific file names |
| Circular dependencies in configuration | Configuration references itself | Break circular dependencies by refactoring configuration structure |
Configuration Load Order Issues
If configuration values are not being overridden as expected:
- Check the priority values of your loaders (higher priority loaders override lower priority ones)
- Verify file naming conventions for environment-specific files
- Check that environment variables are properly formatted with the expected prefix
- Use the `has` method to check if a key exists before attempting to access it
Debugging Configuration
```rust
// Print all configuration keys and values (for debugging only)
fn debug_print_config(config: &dyn ConfigService, prefix: &str) {
    // This is a simplified example - in a real app, you'd need to recursively
    // traverse the configuration structure
    println!("Configuration (prefix: {})", prefix);

    // Print basic values if they exist
    for key in ["host", "port", "username", "enabled"] {
        let full_key = if prefix.is_empty() {
            key.to_string()
        } else {
            format!("{}.{}", prefix, key)
        };

        if config.has(&full_key) {
            if let Ok(value) = config.get_string(&full_key) {
                println!("  {} = {}", full_key, value);
            } else if let Ok(value) = config.get_int(&full_key) {
                println!("  {} = {}", full_key, value);
            } else if let Ok(value) = config.get_bool(&full_key) {
                println!("  {} = {}", full_key, value);
            }
        }
    }
}
```
Validating Configuration
```rust
// Validate required configuration keys
fn validate_config(config: &dyn ConfigService) -> Result<(), ConfigError> {
    // Check required keys
    for key in ["server.host", "server.port", "database.url"] {
        if !config.has(key) {
            return Err(ConfigError::KeyNotFound(key.to_string()));
        }
    }

    // Validate value ranges
    let port = config.get_int("server.port")?;
    if port < 1 || port > 65535 {
        return Err(ConfigError::ParseError {
            source: "server.port".to_string(),
            reason: format!("Port must be between 1 and 65535, got {}", port),
        });
    }

    // Check conditional requirements
    if config.get_bool("tls.enabled").unwrap_or(false) {
        for key in ["tls.cert_file", "tls.key_file"] {
            if !config.has(key) {
                return Err(ConfigError::KeyNotFound(
                    format!("{} is required when TLS is enabled", key)
                ));
            }
        }
    }

    Ok(())
}
```
Reference
Configuration Key Naming Conventions
- Use lowercase names
- Use dots for nesting (e.g., `database.host`)
- Use underscores for multi-word names (e.g., `http_server.max_connections`)
- Group related settings (e.g., `database.*`, `logging.*`)
Standard Configuration Keys
| Key | Type | Description |
|---|---|---|
| `app.name` | String | Application name |
| `app.environment` | String | Deployment environment (development, test, production) |
| `app.version` | String | Application version |
| `server.host` | String | Server hostname or IP address |
| `server.port` | Integer | Server port number |
| `database.type` | String | Database type (postgres, mysql, etc.) |
| `database.url` | String | Database connection URL |
| `database.max_connections` | Integer | Maximum database connections |
| `logging.level` | String | Logging level (debug, info, warn, error) |
| `logging.format` | String | Logging format (json, text, etc.) |
| `features.*` | Boolean | Feature flags |
Configuration File Formats
Navius supports the following configuration file formats:
- YAML: `.yaml` or `.yml` files
- JSON: `.json` files
- TOML: `.toml` files
- Environment Files: `.env` files
Configuration File Loading Order
1. `default.*` (base configuration)
2. `{environment}.*` (environment-specific overrides)
3. `local.*` (local development overrides, not committed to version control)
4. Environment variables
5. Command-line arguments
Related Documents
title: "Health API Reference"
description: "API documentation for Navius health monitoring endpoints"
category: api
tags:
- api
- health
- monitoring
- actuator
related:
- ../patterns/health-check-pattern.md
- ../../02_examples/health-service-example.md
- api-resource.md
last_updated: March 31, 2025
version: 1.0
Health API Reference
Overview
The Health API provides endpoints for monitoring the operational status of the application and its dependencies. It offers different levels of detail for different consumers, from simple UP/DOWN status for load balancers to detailed component status for administrators.
Base URL
All health endpoints are accessible under the `/actuator` path prefix.
Authentication
- Basic health status (`/actuator/health`) is publicly accessible by default
- Detailed health information (`/actuator/health/detail`) requires authentication with the ADMIN role
- Health dashboard (`/actuator/dashboard`) requires authentication with the ADMIN role
Authentication requirements can be configured in `config/default.yaml` under the `api.security.endpoints` section.
Endpoints
Health Status
GET /actuator/health
Returns the overall health status of the application.
Response Format
```json
{
  "status": "UP",
  "timestamp": "2024-03-26T12:34:56.789Z"
}
```
Status Values
- `UP`: The application is functioning normally
- `DOWN`: The application is not functioning
- `DEGRADED`: The application is functioning with reduced capabilities
- `UNKNOWN`: The application status cannot be determined
Response Codes
- `200 OK`: The application is UP or DEGRADED
- `503 Service Unavailable`: The application is DOWN
- `500 Internal Server Error`: An error occurred checking health
Detailed Health Status
GET /actuator/health/detail
Returns detailed health information for all components.
Response Format
```json
{
  "status": "UP",
  "timestamp": "2024-03-26T12:34:56.789Z",
  "components": [
    {
      "name": "database",
      "status": "UP",
      "details": {
        "type": "postgres",
        "version": "14.5",
        "connection_pool": "10/20",
        "response_time_ms": 5
      }
    },
    {
      "name": "redis-cache",
      "status": "UP",
      "details": {
        "used_memory": "1.2GB",
        "uptime": "3d",
        "clients_connected": 5
      }
    },
    {
      "name": "disk-space",
      "status": "UP",
      "details": {
        "total": "100GB",
        "free": "75GB",
        "threshold": "10GB"
      }
    },
    {
      "name": "external-api",
      "status": "DOWN",
      "details": {
        "error": "Connection timeout",
        "url": "https://api.example.com/status",
        "last_successful_check": "2024-03-26T10:15:30.000Z"
      }
    }
  ]
}
```
Response Codes
- `200 OK`: Health information retrieved successfully
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Insufficient permissions
- `500 Internal Server Error`: An error occurred checking health
Component Health
GET /actuator/health/component/{component-name}
Returns detailed health information for a specific component.
Parameters
- `component-name`: Name of the component to check
Response Format
```json
{
  "name": "database",
  "status": "UP",
  "timestamp": "2024-03-26T12:34:56.789Z",
  "details": {
    "type": "postgres",
    "version": "14.5",
    "connection_pool": "10/20",
    "response_time_ms": 5
  }
}
```
Response Codes
- `200 OK`: Component health retrieved successfully
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Insufficient permissions
- `404 Not Found`: Component not found
- `500 Internal Server Error`: An error occurred checking health
Health Dashboard
GET /actuator/dashboard
Returns health history and trend data.
Query Parameters
- `duration`: Time period to show history for (default: `1h`; options: `5m`, `15m`, `1h`, `6h`, `1d`, `7d`)
- `component`: Optional component name to filter by
Response Format
```json
{
  "current_status": "UP",
  "timestamp": "2024-03-26T12:34:56.789Z",
  "uptime_percentage": 99.8,
  "last_outage": "2024-03-25T08:15:20.000Z",
  "history": [
    {
      "timestamp": "2024-03-26T12:30:00.000Z",
      "status": "UP",
      "components": {
        "database": "UP",
        "redis-cache": "UP",
        "disk-space": "UP",
        "external-api": "DOWN"
      }
    },
    {
      "timestamp": "2024-03-26T12:25:00.000Z",
      "status": "UP",
      "components": {
        "database": "UP",
        "redis-cache": "UP",
        "disk-space": "UP",
        "external-api": "DOWN"
      }
    }
    // Additional history entries...
  ],
  "components": [
    {
      "name": "database",
      "current_status": "UP",
      "uptime_percentage": 100.0,
      "last_outage": null
    },
    {
      "name": "redis-cache",
      "current_status": "UP",
      "uptime_percentage": 100.0,
      "last_outage": null
    },
    {
      "name": "disk-space",
      "current_status": "UP",
      "uptime_percentage": 100.0,
      "last_outage": null
    },
    {
      "name": "external-api",
      "current_status": "DOWN",
      "uptime_percentage": 82.5,
      "last_outage": "2024-03-26T10:15:00.000Z"
    }
  ]
}
```
Response Codes
- `200 OK`: Dashboard data retrieved successfully
- `400 Bad Request`: Invalid parameters
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Insufficient permissions
- `500 Internal Server Error`: An error occurred retrieving dashboard data
Clear Dashboard History
POST /actuator/dashboard/clear
Clears the health dashboard history.
Response Format
```json
{
  "message": "Health history cleared successfully",
  "timestamp": "2024-03-26T12:34:56.789Z"
}
```
Response Codes
- `200 OK`: History cleared successfully
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Insufficient permissions
- `500 Internal Server Error`: An error occurred clearing history
Data Models
Health Status
| Field | Type | Description |
|---|---|---|
| status | string | Overall health status (UP, DOWN, etc.) |
| timestamp | string | ISO-8601 timestamp of the health check |
Component Health
| Field | Type | Description |
|---|---|---|
| name | string | Name of the component |
| status | string | Component health status (UP, DOWN, etc.) |
| details | object | Component-specific health details |
Health History Entry
| Field | Type | Description |
|---|---|---|
| timestamp | string | ISO-8601 timestamp of the history entry |
| status | string | Overall health status at that time |
| components | object | Status of each component at that time |
Error Responses
Standard Error Format
```json
{
  "error": {
    "code": "HEALTH_CHECK_FAILED",
    "message": "Failed to check health of component: database",
    "details": {
      "component": "database",
      "reason": "Connection timeout"
    }
  }
}
```
Common Error Codes
| Code | Description |
|---|---|
| HEALTH_CHECK_FAILED | Failed to check health of one or more components |
| COMPONENT_NOT_FOUND | Requested component does not exist |
| INVALID_PARAMETER | Invalid parameter provided |
| INSUFFICIENT_PERMISSIONS | User does not have required permissions |
Usage Examples
Check Basic Health Status
```bash
curl -X GET http://localhost:3000/actuator/health
```
Check Detailed Health Status (Authenticated)
```bash
curl -X GET http://localhost:3000/actuator/health/detail \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```
Check Specific Component Health
```bash
curl -X GET http://localhost:3000/actuator/health/component/database \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```
Get Health Dashboard for Last Hour
```bash
curl -X GET "http://localhost:3000/actuator/dashboard?duration=1h" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```
Clear Dashboard History
```bash
curl -X POST http://localhost:3000/actuator/dashboard/clear \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```
Configuration
The Health API can be configured in `config/default.yaml`:
```yaml
# Health monitoring configuration
health:
  # Enable/disable health monitoring
  enabled: true
  # Check interval in seconds
  check_interval_seconds: 60
  # Maximum history entries to keep
  max_history_size: 1000
  # Controls which components are enabled
  components:
    database: true
    redis: true
    disk_space: true
    external_apis: true
  # Security settings
  security:
    # Whether detailed health info requires authentication
    require_auth_for_detail: true
    # Required role for detailed health info
    detail_role: "ADMIN"
```
Implementing Custom Health Indicators
You can implement custom health indicators by implementing the `HealthIndicator` trait:

```rust
use crate::core::services::health::{HealthIndicator, DependencyStatus};
use std::sync::Arc;
use std::collections::HashMap;
use crate::core::router::AppState;

pub struct CustomHealthIndicator {
    // Your custom fields
}

impl HealthIndicator for CustomHealthIndicator {
    fn name(&self) -> String {
        "custom-component".to_string()
    }

    fn check_health(&self, _state: &Arc<AppState>) -> DependencyStatus {
        // Implement your health check logic
        if check_passes() {
            DependencyStatus::up()
                .with_detail("version", "1.0")
                .with_detail("custom_metric", "value")
        } else {
            DependencyStatus::down()
                .with_detail("error", "Check failed")
        }
    }

    fn is_critical(&self) -> bool {
        false // Set to true if this component is critical
    }
}
```
Register your custom indicator with the health service:
```rust
let mut health_service = service_registry.get_mut::<HealthService>();
health_service.register_indicator(Box::new(CustomHealthIndicator::new()));
```
Related Documentation
title: "Router API Reference" description: "Reference documentation for defining and managing routes in Navius applications using Axum." category: reference tags:
- api
- routing
- router
- axum related:
- application-api.md
- ../../04_guides/features/api-integration.md
- ../../02_examples/middleware-example.md last_updated: March 31, 2025 version: 1.0
Router API Reference
Overview
This document provides reference information on how routing is handled within Navius applications, primarily leveraging the Axum framework's `Router`. Routing defines how incoming HTTP requests are directed to specific handler functions based on their path and method.
Core Concepts
Routing is managed within the `Application` struct, which holds the main Axum `Router`.
```rust
// From the Application struct definition
pub struct Application {
    // ... other fields ...
    router: Router,
    // ... other fields ...
}

impl Application {
    // ... other methods ...
    pub fn router(&self) -> &Router {
        &self.router
    }
    // ... other methods ...
}
```
The `ApplicationBuilder` is used to configure the router when constructing the application.
Defining Routes
Routes are defined using the Axum `Router` methods. You typically chain `route` calls to define paths and associate them with handler functions for specific HTTP methods (such as GET and POST).
```rust
use axum::{routing::{get, post}, Router};
use crate::app::handlers::{index_handler, users::{get_users, create_user, get_user_by_id}}; // Example handler paths

// Create a router
let router = Router::new()
    .route("/", get(index_handler))                    // Root path, GET method
    .route("/users", get(get_users).post(create_user)) // /users, GET and POST methods
    .route("/users/:id", get(get_user_by_id));         // Route with a path parameter :id

// Create an application with the router (simplified builder usage)
let app = ApplicationBuilder::new("my-app")
    .with_router(router)
    .build()?;
```
Route Groups (Nesting)
You can group related routes under a common path prefix using `nest`. This helps organize larger applications.
```rust
use axum::{middleware, routing::get, Router};
use crate::app::handlers::users::{get_users, create_user, get_user_by_id, update_user, delete_user}; // Example handler paths
use crate::app::middleware::auth_middleware; // Example middleware path

// Define routes for an API version
fn api_v1_router() -> Router {
    Router::new()
        .route("/users", get(get_users).post(create_user))
        .route("/users/:id", get(get_user_by_id).put(update_user).delete(delete_user))
        // ... other v1 routes
}

// Nest the API router and apply middleware
let main_router = Router::new()
    .route("/", get(index_handler))   // Public route
    .nest("/api/v1", api_v1_router()) // Nest all v1 routes under /api/v1
    .layer(middleware::from_fn(auth_middleware)); // Apply auth middleware to relevant parts

let app = ApplicationBuilder::new("my-app")
    .with_router(main_router)
    .build()?;
```
Router Configuration
Route Parameters
Axum supports extracting parameters from routes using path variables:
```rust
// Define a route with a path parameter
app.route("/users/:id", get(get_user_by_id));

// In the handler function, extract the parameter
async fn get_user_by_id(Path(id): Path<String>) -> impl IntoResponse {
    // Use the extracted id
    // ...
}
```
Route Fallbacks
You can define fallback routes to handle requests that don't match any defined routes:
```rust
let app = Router::new()
    .route("/", get(index_handler))
    .route("/users", get(get_users))
    // Define a fallback for routes that don't match
    .fallback(handle_404);

async fn handle_404() -> impl IntoResponse {
    (StatusCode::NOT_FOUND, "Resource not found")
}
```
Middleware Integration
Middleware can be applied to the entire router or to specific routes using `.layer()`. This allows for cross-cutting concerns such as authentication, logging, and error handling.
```rust
// Apply middleware to all routes
let app = Router::new()
    .route("/", get(index_handler))
    .route("/users", get(get_users))
    .layer(middleware::from_fn(logging_middleware))
    .layer(middleware::from_fn(error_handling_middleware));

// Apply middleware to specific routes or route groups
let protected_routes = Router::new()
    .route("/profile", get(profile_handler))
    .route("/settings", get(settings_handler))
    .layer(middleware::from_fn(auth_middleware));

let app = Router::new()
    .route("/", get(index_handler))     // Public route
    .nest("/account", protected_routes) // Protected routes with auth middleware
    .layer(middleware::from_fn(logging_middleware)); // Global middleware
```
Implementation Details
Router Lifecycle
- Route Definition: Routes are defined using `Router::new()` and `.route()` methods
- Router Configuration: The router is configured with middleware, fallbacks, and nested routes
- Router Integration: The router is passed to the `ApplicationBuilder`
- Request Handling:
  - Incoming request is received
  - Global middleware is applied
  - Route matching is performed
  - Route-specific middleware is applied
  - Handler function is executed
  - Response is returned through the middleware chain
Request Flow Diagram
```mermaid
flowchart TD
    A[Incoming Request] --> B[Global Middleware]
    B --> C{Route Matching}
    C -->|Match Found| D[Route-Specific Middleware]
    C -->|No Match| E[Fallback Handler]
    D --> F[Handler Function]
    F --> G[Response]
    E --> G
    G --> H[Return Through Middleware]
```
Best Practices
- Route Organization: Group related routes using nested routers
- Route Naming: Use RESTful conventions for route naming
- Middleware Application: Apply middleware at the appropriate level (global vs. route-specific)
- Error Handling: Implement proper error handling middleware
- Security: Apply authentication and authorization middleware to protected routes
Examples
Basic API Router
```rust
use axum::{
    routing::{get, post, put, delete},
    Router,
};
use crate::app::handlers::api;

pub fn create_api_router() -> Router {
    Router::new()
        .route("/health", get(api::health_check))
        .route("/products", get(api::get_products).post(api::create_product))
        .route(
            "/products/:id",
            get(api::get_product_by_id)
                .put(api::update_product)
                .delete(api::delete_product),
        )
}
```
Versioned API Router
```rust
use axum::{
    routing::{get, post},
    Router,
};
use crate::app::handlers::{api_v1, api_v2};

pub fn create_versioned_api_router() -> Router {
    // V1 API routes
    let v1_routes = Router::new()
        .route("/users", get(api_v1::get_users).post(api_v1::create_user))
        .route("/users/:id", get(api_v1::get_user_by_id));

    // V2 API routes
    let v2_routes = Router::new()
        .route("/users", get(api_v2::get_users).post(api_v2::create_user))
        .route("/users/:id", get(api_v2::get_user_by_id))
        .route("/users/:id/profile", get(api_v2::get_user_profile));

    // Main router with versioned APIs
    Router::new()
        .route("/health", get(health_check))
        .nest("/api/v1", v1_routes)
        .nest("/api/v2", v2_routes)
}
```
Authentication Protected Routes
```rust
use axum::{
    middleware,
    routing::{get, post},
    Router,
};
use crate::app::middleware::auth;
use crate::app::handlers::{public, protected};

pub fn create_protected_router() -> Router {
    // Public routes that don't require authentication
    let public_routes = Router::new()
        .route("/", get(public::index))
        .route("/about", get(public::about))
        .route("/login", post(public::login));

    // Protected routes that require authentication
    let protected_routes = Router::new()
        .route("/dashboard", get(protected::dashboard))
        .route("/profile", get(protected::profile).put(protected::update_profile))
        .route("/settings", get(protected::settings).put(protected::update_settings))
        .layer(middleware::from_fn(auth::require_auth));

    // Combine routes
    Router::new()
        .merge(public_routes)
        .merge(protected_routes)
}
```
Troubleshooting
Common Issues
| Issue | Possible Cause | Solution |
|---|---|---|
| Route not matching | Incorrect path pattern | Double-check route path patterns, especially parameter syntax |
| Middleware not running | Incorrect middleware order | Ensure middleware is added in the correct order (first added, last executed) |
| Handler receiving wrong parameters | Incorrect extractor usage | Verify parameter extractors match the route definition |
| Route conflicts | Multiple routes with same pattern | Check for duplicate route definitions or overlapping patterns |
| Protected route accessible | Missing authentication middleware | Ensure auth middleware is applied to all protected routes |
Debugging Routes
For debugging routes during development:
```rust
// Print the router for debugging
println!("Router debug: {:#?}", app);

// Add tracing middleware to see route resolution
let app = Router::new()
    .route("/", get(index_handler))
    .layer(middleware::from_fn(|req, next| async {
        println!("Request path: {}", req.uri().path());
        next.run(req).await
    }));
```
Related Documents
title: "Two-Tier Cache API Reference" description: "Comprehensive API reference for the Two-Tier Cache implementation in Navius" category: reference tags:
- api
- caching
- two-tier
- redis
- memory-cache
- reference related:
- ../../04_guides/caching-strategies.md
- ../configuration/cache-config.md
- ../patterns/caching-patterns.md
- ../../02_examples/two-tier-cache-example.md last_updated: March 31, 2025 version: 1.0
Two-Tier Cache API Reference
This document provides a comprehensive reference for the Two-Tier Cache API in the Navius framework, including core interfaces, implementation details, and usage patterns.
Table of Contents
- Overview
- Core Interfaces
- TwoTierCache Implementation
- Factory Functions
- Configuration Options
- Error Handling
- Best Practices
- Examples
Overview
The Two-Tier Cache implementation provides a caching solution that combines a fast in-memory cache (primary) with a distributed Redis cache (secondary). This approach offers:
- High-speed access to frequently used data
- Persistence across application restarts
- Automatic promotion of items from slow to fast cache
- Configurable time-to-live (TTL) settings for both cache levels
- Graceful handling of Redis connection issues
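The fast-then-slow lookup with promotion can be illustrated with a minimal, synchronous sketch. Plain `HashMap`s stand in for the Moka and Redis tiers here; the type and field names are illustrative, not the framework's API:

```rust
use std::collections::HashMap;

/// Minimal two-tier lookup: try the fast tier first, then the slow tier,
/// promoting slow-tier hits into the fast tier on the way out.
struct TwoTierSketch {
    fast: HashMap<String, Vec<u8>>,
    slow: HashMap<String, Vec<u8>>,
    promote_on_get: bool,
}

impl TwoTierSketch {
    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        // 1. Fast tier hit: cheapest path, no network round trip.
        if let Some(v) = self.fast.get(key) {
            return Some(v.clone());
        }
        // 2. Slow tier hit: optionally promote so the next read is fast.
        if let Some(v) = self.slow.get(key).cloned() {
            if self.promote_on_get {
                self.fast.insert(key.to_string(), v.clone());
            }
            return Some(v);
        }
        // 3. Miss in both tiers.
        None
    }
}
```

The real implementation does the same dance asynchronously and maps the final miss to an error rather than `None`, as described under Error Handling below.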
Core Interfaces
CacheOperations Trait
The core interface for all cache implementations:
```rust
pub trait CacheOperations: Send + Sync {
    async fn set(&self, key: &str, value: Vec<u8>, ttl: Option<Duration>) -> Result<(), AppError>;
    async fn get(&self, key: &str) -> Result<Vec<u8>, AppError>;
    async fn delete(&self, key: &str) -> Result<(), AppError>;
    async fn clear(&self) -> Result<(), AppError>;
    async fn get_many(&self, keys: &[&str]) -> Result<HashMap<String, Vec<u8>>, AppError>;
    fn name(&self) -> &'static str;
}
```
DynCacheOperations Trait
An extension of `CacheOperations` that provides typed cache access:
```rust
pub trait DynCacheOperations: CacheOperations {
    fn get_typed_cache<T: 'static>(&self) -> Box<dyn TypedCache<T>>;
}
```
TypedCache Trait
A generic interface for type-safe cache operations:
```rust
pub trait TypedCache<T: 'static>: Send + Sync {
    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), AppError>;
    async fn get(&self, key: &str) -> Result<T, AppError>;
    async fn delete(&self, key: &str) -> Result<(), AppError>;
    async fn clear(&self) -> Result<(), AppError>;
}
```
TwoTierCache Implementation
The `TwoTierCache` struct implements the core cache interfaces and provides the two-tier caching functionality:
```rust
pub struct TwoTierCache {
    fast_cache: Box<dyn DynCacheOperations>,
    slow_cache: Box<dyn DynCacheOperations>,
    promote_on_get: bool,
    fast_ttl: Option<Duration>,
    slow_ttl: Option<Duration>,
}
```
Fields
| Field | Type | Description |
|---|---|---|
| `fast_cache` | `Box<dyn DynCacheOperations>` | The primary in-memory cache (typically Moka-based) |
| `slow_cache` | `Box<dyn DynCacheOperations>` | The secondary distributed cache (typically Redis-based) |
| `promote_on_get` | `bool` | Whether to promote items from slow to fast cache on cache hits |
| `fast_ttl` | `Option<Duration>` | TTL for items in the fast cache (`None` = use default) |
| `slow_ttl` | `Option<Duration>` | TTL for items in the slow cache (`None` = use default) |
Methods
Constructor
```rust
pub fn new(
    fast_cache: Box<dyn DynCacheOperations>,
    slow_cache: Box<dyn DynCacheOperations>,
    promote_on_get: bool,
    fast_ttl: Option<Duration>,
    slow_ttl: Option<Duration>,
) -> Self
```
Creates a new `TwoTierCache` instance with the specified parameters.
Core Cache Operations
```rust
// Set a value in both fast and slow caches
async fn set(&self, key: &str, value: Vec<u8>, ttl: Option<Duration>) -> Result<(), AppError>

// Get a value, trying the fast cache first, then the slow cache with promotion
async fn get(&self, key: &str) -> Result<Vec<u8>, AppError>

// Delete a value from both caches
async fn delete(&self, key: &str) -> Result<(), AppError>

// Clear both caches
async fn clear(&self) -> Result<(), AppError>

// Get multiple values, optimizing for batch operations
async fn get_many(&self, keys: &[&str]) -> Result<HashMap<String, Vec<u8>>, AppError>
```
TypedCache Support
```rust
// Get a typed cache wrapper for working with specific types
fn get_typed_cache<T: 'static>(&self) -> Box<dyn TypedCache<T>>
```
Factory Functions
The framework provides several convenience functions for creating cache instances:
create_two_tier_cache
```rust
pub async fn create_two_tier_cache(
    config: &CacheConfig,
    metrics: Option<Arc<MetricsHandler>>,
) -> Result<Arc<Box<dyn DynCacheOperations>>, AppError>
```
Creates a standard two-tier cache with:
- Fast in-memory Moka cache
- Slow Redis-based cache
- Automatic promotion from slow to fast
- Configurable TTLs for both layers
create_memory_only_two_tier_cache
```rust
pub async fn create_memory_only_two_tier_cache(
    config: &CacheConfig,
    metrics: Option<Arc<MetricsHandler>>,
) -> Arc<Box<dyn DynCacheOperations>>
```
Creates a memory-only two-tier cache (for development/testing) with:
- Fast in-memory Moka cache
- Slow in-memory Moka cache (simulating Redis)
- Suitable for local development without Redis dependency
is_redis_available
```rust
pub async fn is_redis_available(redis_url: &str) -> bool
```
Helper function to check if Redis is available at the specified URL.
Configuration Options
The `CacheConfig` struct provides configuration options for the caching system:
```rust
pub struct CacheConfig {
    pub redis_url: String,
    pub redis_pool_size: usize,
    pub redis_timeout_ms: u64,
    pub redis_namespace: String,
    pub moka_max_capacity: u64,
    pub moka_time_to_live_ms: u64,
    pub moka_time_to_idle_ms: u64,
    pub redis_ttl_seconds: u64,
    pub cache_promotion: bool,
}
```
Configuration Properties
| Property | Type | Description |
|---|---|---|
| `redis_url` | `String` | URL for the Redis connection (e.g., "redis://localhost:6379") |
| `redis_pool_size` | `usize` | Number of connections in the Redis connection pool |
| `redis_timeout_ms` | `u64` | Timeout for Redis operations in milliseconds |
| `redis_namespace` | `String` | Namespace prefix for all Redis keys |
| `moka_max_capacity` | `u64` | Maximum number of items in the Moka cache |
| `moka_time_to_live_ms` | `u64` | Default TTL for Moka cache items in milliseconds |
| `moka_time_to_idle_ms` | `u64` | Time after which idle items are evicted from the Moka cache |
| `redis_ttl_seconds` | `u64` | Default TTL for Redis cache items in seconds |
| `cache_promotion` | `bool` | Whether to promote items from slow to fast cache on get |
Error Handling
The Two-Tier Cache handles the following error scenarios:
Cache Miss
When an item is not found in either cache, a `CacheError::Miss` is returned, which is then converted to an `AppError::NotFound`.
Redis Connection Issues
If Redis is unavailable, operations on the slow cache will fail gracefully:
- `get` operations will only use the fast cache
- `set` operations will only update the fast cache
- Error details are logged for diagnostics
Serialization Errors
If an item cannot be serialized or deserialized, a `CacheError::Serialization` is returned, which is converted to an `AppError::BadRequest`.
Best Practices
Optimal Usage
- Choose Appropriate TTLs
  - Fast cache TTL should be shorter than slow cache TTL
  - Critical data should have shorter TTLs to ensure freshness
  - Less critical data can have longer TTLs for performance
- Key Design
  - Use consistent key naming conventions
  - Include type information in keys to prevent type confusion
  - Use namespaces to avoid key collisions between features
- Cache Size Management
  - Set appropriate Moka cache size limits based on memory constraints
  - Monitor cache hit/miss ratios to optimize size
- Error Handling
  - Always handle cache errors gracefully in application code
  - Implement fallback mechanisms for cache misses
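The key-design points above can be captured in a small helper. This is an illustrative sketch only, not a function Navius provides; the `"app:"` namespace mirrors the `redis_namespace` example from the configuration section.

```rust
/// Build a cache key as "<namespace>:<entity type>:<id>", e.g. "app:user:42".
/// Embedding the type name prevents two features from reading each other's
/// entries, and the namespace prefix avoids collisions between features.
fn cache_key(namespace: &str, entity: &str, id: &str) -> String {
    format!("{}:{}:{}", namespace, entity, id)
}
```

Using one helper everywhere keeps key formats consistent, which also makes it possible to delete or scan all keys for one entity type later.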
Performance Considerations
- Batch Operations
  - Use `get_many` for retrieving multiple items when possible
  - Group related cache operations to reduce network overhead
- Serialization
  - Use efficient serialization formats (e.g., bincode for binary data)
  - Consider compression for large objects
- Promotion Strategy
  - Enable promotion for frequently accessed items
  - Disable promotion for large items that would consume significant memory
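A size-based promotion policy like the last point can be expressed as a simple predicate. The function name and byte threshold below are assumptions for illustration, not part of the Navius API:

```rust
/// Decide whether a slow-tier hit should be copied into the in-memory tier.
/// Large values stay in Redis only, so a few big objects cannot evict
/// many small, hot entries from the size-limited in-memory cache.
fn should_promote(value_len: usize, max_promotable_bytes: usize) -> bool {
    value_len <= max_promotable_bytes
}
```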
Examples
Basic Usage
```rust
// Create a two-tier cache
let cache_config = CacheConfig {
    redis_url: "redis://localhost:6379".to_string(),
    redis_pool_size: 10,
    redis_timeout_ms: 100,
    redis_namespace: "app:".to_string(),
    moka_max_capacity: 10_000,
    moka_time_to_live_ms: 300_000, // 5 minutes
    moka_time_to_idle_ms: 600_000, // 10 minutes
    redis_ttl_seconds: 3600,       // 1 hour
    cache_promotion: true,
};

let cache = create_two_tier_cache(&cache_config, None).await?;

// Get a typed cache for a specific type
let user_cache = cache.get_typed_cache::<User>();

// Store a user
let user = User { id: "user1".to_string(), name: "Alice".to_string() };
user_cache.set("user:user1", user, None).await?;

// Retrieve the user
let retrieved_user = user_cache.get("user:user1").await?;
```
Error Handling
```rust
match user_cache.get("user:unknown").await {
    Ok(user) => {
        // Use the user
    },
    Err(err) if err.is_not_found() => {
        // Handle cache miss
        let user = fetch_user_from_database("unknown").await?;
        user_cache.set("user:unknown", user.clone(), None).await?;
        // Use the user
    },
    Err(err) => {
        // Handle other errors
        log::error!("Cache error: {}", err);
        // Fallback strategy
    }
}
```
Custom Cache Configuration
```rust
// Create a two-tier cache with custom TTLs
let fast_ttl = Duration::from_secs(60);   // 1 minute
let slow_ttl = Duration::from_secs(3600); // 1 hour

let moka_cache = create_moka_cache(&cache_config, None);
let redis_cache = create_redis_cache(&cache_config, None).await?;

let custom_cache = TwoTierCache::new(
    moka_cache,
    redis_cache,
    true, // promote_on_get
    Some(fast_ttl),
    Some(slow_ttl),
);
```
Related Documentation
- Caching Strategies Guide - Advanced caching concepts and strategies
- Cache Configuration - Configuration options for the caching system
- Caching Patterns - Common caching patterns and best practices
- Two-Tier Cache Example - Example implementation and usage
title: "Pet Database API Reference" description: "Reference documentation for the Pet Database API, including CRUD operations and architecture details" category: reference tags:
- api
- database
- pets
- repository related:
- database-api.md
- ../patterns/repository-pattern.md
- ../../02_examples/database-service-example.md last_updated: April 9, 2025 version: 1.0
Pet Database API
Overview
The Pet Database API provides a complete set of CRUD (Create, Read, Update, Delete) operations for managing pet records in the database. This API is built following clean architecture principles, with proper separation between database abstractions in the core layer and pet-specific implementations in the application layer.
This reference document details all endpoints, data structures, request/response formats, and integration patterns for working with pet data in Navius applications.
Endpoints
Get All Pets
Retrieves a list of all pets in the database.
URL: `/petdb`
Method: `GET`
Query Parameters:
- `limit` (optional): Maximum number of records to return (default: 100)
- `offset` (optional): Number of records to skip (default: 0)
- `species` (optional): Filter by pet species
- `sort` (optional): Sort field (e.g., `name`, `age`, `created_at`)
- `order` (optional): Sort order (`asc` or `desc`; default: `asc`)
Authentication: Public
Response:
```json
{
  "data": [
    {
      "id": "uuid-string",
      "name": "Pet Name",
      "species": "Pet Species",
      "age": 5,
      "created_at": "2024-06-01T12:00:00.000Z",
      "updated_at": "2024-06-01T12:00:00.000Z"
    },
    ...
  ],
  "pagination": {
    "total": 150,
    "limit": 100,
    "offset": 0,
    "next_offset": 100
  }
}
```
Status Codes:
- `200 OK`: Successfully retrieved the list of pets
- `400 Bad Request`: Invalid query parameters
- `500 Internal Server Error`: Server encountered an error
Curl Example:
```shell
# Get all pets
curl -X GET http://localhost:3000/petdb

# Get pets with pagination and filtering
curl -X GET "http://localhost:3000/petdb?limit=10&offset=20&species=dog&sort=age&order=desc"
```
Code Example:
```rust
// Client-side request to get all pets
async fn get_all_pets(
    client: &Client,
    limit: Option<u32>,
    species: Option<&str>,
) -> Result<PetListResponse> {
    let mut req = client.get("http://localhost:3000/petdb");

    if let Some(limit) = limit {
        req = req.query(&[("limit", limit.to_string())]);
    }
    if let Some(species) = species {
        req = req.query(&[("species", species)]);
    }

    let response = req.send().await?;

    if response.status().is_success() {
        Ok(response.json::<PetListResponse>().await?)
    } else {
        Err(format!("Failed to get pets: {}", response.status()).into())
    }
}
```
Get Pet by ID
Retrieves a specific pet by its unique identifier.
URL: `/petdb/:id`
Method: `GET`
URL Parameters:
- `id`: UUID of the pet to retrieve
Authentication: Public
Response:
```json
{
  "id": "uuid-string",
  "name": "Pet Name",
  "species": "Pet Species",
  "age": 5,
  "created_at": "2024-06-01T12:00:00.000Z",
  "updated_at": "2024-06-01T12:00:00.000Z"
}
```
Status Codes:
- `200 OK`: Successfully retrieved the pet
- `400 Bad Request`: Invalid UUID format
- `404 Not Found`: Pet with the given ID was not found
- `500 Internal Server Error`: Server encountered an error
Curl Example:
```shell
curl -X GET http://localhost:3000/petdb/550e8400-e29b-41d4-a716-446655440000
```
Code Example:
```rust
// Client-side request to get a pet by ID
async fn get_pet_by_id(client: &Client, id: &str) -> Result<Pet> {
    let response = client
        .get(&format!("http://localhost:3000/petdb/{}", id))
        .send()
        .await?;

    match response.status() {
        StatusCode::OK => Ok(response.json::<Pet>().await?),
        StatusCode::NOT_FOUND => Err("Pet not found".into()),
        _ => Err(format!("Failed to get pet: {}", response.status()).into()),
    }
}
```
Create Pet
Creates a new pet in the database.
URL: /petdb
Method: POST
Authentication: Required
Request Headers:
```
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Content-Type: application/json
```
Request Body:
```json
{
  "name": "Pet Name",
  "species": "Pet Species",
  "age": 5
}
```
Validation Rules:
- `name`: Required, cannot be empty, maximum 50 characters
- `species`: Required, cannot be empty
- `age`: Required, must be non-negative and realistic (0-100)
Response:
```json
{
  "id": "uuid-string",
  "name": "Pet Name",
  "species": "Pet Species",
  "age": 5,
  "created_at": "2024-06-01T12:00:00.000Z",
  "updated_at": "2024-06-01T12:00:00.000Z"
}
```
Status Codes:
- `201 Created`: Successfully created the pet
- `400 Bad Request`: Validation error in the request data
- `401 Unauthorized`: Authentication required
- `500 Internal Server Error`: Server encountered an error
Curl Example:
```shell
curl -X POST http://localhost:3000/petdb \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
  -d '{"name":"Fluffy","species":"cat","age":3}'
```
Code Example:
```rust
// Client-side request to create a pet
async fn create_pet(client: &Client, token: &str, pet: &CreatePetRequest) -> Result<Pet> {
    let response = client
        .post("http://localhost:3000/petdb")
        .header("Authorization", format!("Bearer {}", token))
        .json(pet)
        .send()
        .await?;

    match response.status() {
        StatusCode::CREATED => Ok(response.json::<Pet>().await?),
        StatusCode::BAD_REQUEST => {
            let error = response.json::<ErrorResponse>().await?;
            Err(format!("Validation error: {}", error.message).into())
        },
        _ => Err(format!("Failed to create pet: {}", response.status()).into()),
    }
}
```
Update Pet
Updates an existing pet in the database.
URL: `/petdb/:id`
Method: `PUT`
URL Parameters:
- `id`: UUID of the pet to update
Authentication: Required
Request Headers:
```
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Content-Type: application/json
```
Request Body:
```json
{
  "name": "Updated Name",       // Optional
  "species": "Updated Species", // Optional
  "age": 6                      // Optional
}
```
Validation Rules:
- `name` (if provided): Cannot be empty, maximum 50 characters
- `species` (if provided): Cannot be empty
- `age` (if provided): Must be non-negative and realistic (0-100)
Response:
```json
{
  "id": "uuid-string",
  "name": "Updated Name",
  "species": "Updated Species",
  "age": 6,
  "created_at": "2024-06-01T12:00:00.000Z",
  "updated_at": "2024-06-01T13:00:00.000Z"
}
```
Status Codes:
- `200 OK`: Successfully updated the pet
- `400 Bad Request`: Invalid UUID format or validation error
- `401 Unauthorized`: Authentication required
- `404 Not Found`: Pet with the given ID was not found
- `500 Internal Server Error`: Server encountered an error
Curl Example:
```shell
curl -X PUT http://localhost:3000/petdb/550e8400-e29b-41d4-a716-446655440000 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
  -d '{"name":"Fluffy Jr.","age":4}'
```
Code Example:
```rust
// Client-side request to update a pet
async fn update_pet(client: &Client, token: &str, id: &str, update: &UpdatePetRequest) -> Result<Pet> {
    let response = client
        .put(&format!("http://localhost:3000/petdb/{}", id))
        .header("Authorization", format!("Bearer {}", token))
        .json(update)
        .send()
        .await?;

    match response.status() {
        StatusCode::OK => Ok(response.json::<Pet>().await?),
        StatusCode::NOT_FOUND => Err("Pet not found".into()),
        StatusCode::BAD_REQUEST => {
            let error = response.json::<ErrorResponse>().await?;
            Err(format!("Validation error: {}", error.message).into())
        },
        _ => Err(format!("Failed to update pet: {}", response.status()).into()),
    }
}
```
Delete Pet
Deletes a pet from the database.
URL: `/petdb/:id`
Method: `DELETE`
URL Parameters:
- `id`: UUID of the pet to delete
Authentication: Required
Request Headers:
```
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```
Response: No content
Status Codes:
- `204 No Content`: Successfully deleted the pet
- `400 Bad Request`: Invalid UUID format
- `401 Unauthorized`: Authentication required
- `404 Not Found`: Pet with the given ID was not found
- `500 Internal Server Error`: Server encountered an error
Curl Example:
```shell
curl -X DELETE http://localhost:3000/petdb/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```
Code Example:
```rust
// Client-side request to delete a pet
async fn delete_pet(client: &Client, token: &str, id: &str) -> Result<()> {
    let response = client
        .delete(&format!("http://localhost:3000/petdb/{}", id))
        .header("Authorization", format!("Bearer {}", token))
        .send()
        .await?;

    match response.status() {
        StatusCode::NO_CONTENT => Ok(()),
        StatusCode::NOT_FOUND => Err("Pet not found".into()),
        _ => Err(format!("Failed to delete pet: {}", response.status()).into()),
    }
}
```
Data Models
Pet
```rust
/// Represents a pet entity in the database
#[derive(Debug, Serialize, Deserialize)]
pub struct Pet {
    /// Unique identifier
    pub id: String,
    /// Pet name
    pub name: String,
    /// Pet species
    pub species: String,
    /// Pet age in years
    pub age: u32,
    /// Creation timestamp
    pub created_at: DateTime<Utc>,
    /// Last update timestamp
    pub updated_at: DateTime<Utc>,
}
```
CreatePetRequest
```rust
/// Request to create a new pet
#[derive(Debug, Serialize, Deserialize, Validate)]
pub struct CreatePetRequest {
    /// Pet name
    #[validate(required, length(min = 1, max = 50))]
    pub name: String,
    /// Pet species
    #[validate(required, length(min = 1))]
    pub species: String,
    /// Pet age in years
    #[validate(range(min = 0, max = 100))]
    pub age: u32,
}
```
UpdatePetRequest
```rust
/// Request to update an existing pet
#[derive(Debug, Serialize, Deserialize, Validate)]
pub struct UpdatePetRequest {
    /// Optional updated pet name
    #[validate(length(min = 1, max = 50))]
    pub name: Option<String>,
    /// Optional updated pet species
    #[validate(length(min = 1))]
    pub species: Option<String>,
    /// Optional updated pet age
    #[validate(range(min = 0, max = 100))]
    pub age: Option<u32>,
}
```
PetListResponse
```rust
/// Response containing a list of pets with pagination
#[derive(Debug, Serialize, Deserialize)]
pub struct PetListResponse {
    /// List of pet records
    pub data: Vec<Pet>,
    /// Pagination information
    pub pagination: Pagination,
}

/// Pagination metadata
#[derive(Debug, Serialize, Deserialize)]
pub struct Pagination {
    /// Total number of records
    pub total: u64,
    /// Number of records in the current page
    pub limit: u32,
    /// Starting offset
    pub offset: u32,
    /// Next page offset (null if last page)
    pub next_offset: Option<u32>,
}
```
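The `next_offset` field follows directly from `total`, `limit`, and `offset`. A sketch of the derivation (a hypothetical helper for illustration, not a function the framework exposes):

```rust
/// Compute the offset of the next page, or None when the current page
/// already covers the remaining records (i.e. this is the last page).
fn next_offset(total: u64, limit: u32, offset: u32) -> Option<u32> {
    let next = offset as u64 + limit as u64; // widen to avoid u32 overflow
    if next < total { Some(next as u32) } else { None }
}
```

This matches the example response above, where `total: 150`, `limit: 100`, `offset: 0` yields `next_offset: 100`.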
Architecture
The Pet API follows a clean architecture approach with the following layers:
Core Layer
- EntityRepository: Generic repository interface in `core/database/repository.rs` that defines standard CRUD operations
- Database Utilities: Common database functions in `core/database/utils.rs` for transaction management and error handling

```rust
/// Generic repository interface for database entities
pub trait EntityRepository<T, ID, C, U> {
    /// Create a new entity
    async fn create(&self, data: &C) -> Result<T>;
    /// Find an entity by ID
    async fn find_by_id(&self, id: &ID) -> Result<Option<T>>;
    /// Find all entities matching criteria
    async fn find_all(&self, criteria: &HashMap<String, Value>) -> Result<Vec<T>>;
    /// Update an entity
    async fn update(&self, id: &ID, data: &U) -> Result<T>;
    /// Delete an entity
    async fn delete(&self, id: &ID) -> Result<bool>;
}
```
Application Layer
- PetRepository: Implementation of the `EntityRepository` for Pet entities in `app/database/repositories/pet_repository.rs`
- PetService: Business logic and validation in `app/services/pet_service.rs`
- API Endpoints: HTTP handlers in `app/api/pet_db.rs` that expose the functionality via a REST API

```rust
/// Implementation of the Pet repository
pub struct PetRepository {
    pool: Pool<Postgres>,
}

impl PetRepository {
    /// Create a new PetRepository
    pub fn new(pool: Pool<Postgres>) -> Self {
        Self { pool }
    }
}

impl EntityRepository<Pet, String, CreatePetRequest, UpdatePetRequest> for PetRepository {
    async fn create(&self, data: &CreatePetRequest) -> Result<Pet> {
        // Implementation...
    }

    async fn find_by_id(&self, id: &String) -> Result<Option<Pet>> {
        // Implementation...
    }

    // Other implementations...
}
```

```rust
/// Pet service handling business logic
pub struct PetService {
    repository: Arc<dyn EntityRepository<Pet, String, CreatePetRequest, UpdatePetRequest>>,
}

impl PetService {
    pub fn new(
        repository: Arc<dyn EntityRepository<Pet, String, CreatePetRequest, UpdatePetRequest>>,
    ) -> Self {
        Self { repository }
    }

    /// Create a new pet with validation
    pub async fn create_pet(&self, data: CreatePetRequest) -> Result<Pet> {
        // Validate data
        data.validate()?;

        // Create pet in database
        self.repository.create(&data).await
    }

    // Other methods...
}
```
This separation allows for clear responsibilities:
- Core Layer: Generic interfaces and abstractions
- App Layer: Pet-specific implementations
Implementation
HTTP Handlers
```rust
/// Pet database API endpoints
pub fn pet_routes() -> Router {
    Router::new()
        .route("/", get(get_all_pets).post(create_pet))
        .route("/:id", get(get_pet_by_id).put(update_pet).delete(delete_pet))
}

/// Handler for GET /petdb
async fn get_all_pets(
    State(state): State<AppState>,
    Query(params): Query<GetAllPetsParams>,
) -> Result<Json<PetListResponse>, AppError> {
    let pet_service = &state.pet_service;
    let criteria = HashMap::new();
    // Convert query params to criteria...

    let pets = pet_service
        .find_all_pets(criteria, params.limit, params.offset)
        .await?;
    Ok(Json(pets))
}

/// Handler for POST /petdb
async fn create_pet(
    State(state): State<AppState>,
    auth: AuthExtractor,
    Json(data): Json<CreatePetRequest>,
) -> Result<(StatusCode, Json<Pet>), AppError> {
    // Verify permissions
    if !auth.has_permission("create:pets") {
        return Err(AppError::forbidden("Insufficient permissions"));
    }

    let pet_service = &state.pet_service;
    let pet = pet_service.create_pet(data).await?;
    Ok((StatusCode::CREATED, Json(pet)))
}

// Other handlers...
```
Integration with Router
```rust
// In app/api/router.rs
pub fn api_routes() -> Router {
    Router::new()
        // Other routes...
        .nest("/petdb", pet_routes())
        // Other routes...
}
```
Error Handling
The API follows a consistent error handling approach:
- 400 Bad Request: Input validation errors (invalid data, format issues)
- 401 Unauthorized: Authentication issues
- 404 Not Found: Resource not found
- 500 Internal Server Error: Database or server-side errors
All error responses include:
- HTTP status code
- Error message
- Error type
Example error response:
```json
{
  "code": 400,
  "message": "Pet name cannot be empty",
  "error_type": "validation_error",
  "details": [
    {
      "field": "name",
      "error": "Cannot be empty"
    }
  ]
}
```
Error Types
```rust
/// Application error types
pub enum AppErrorType {
    /// Invalid input data
    ValidationError,
    /// Resource not found
    NotFound,
    /// Authentication error
    Unauthorized,
    /// Permission error
    Forbidden,
    /// Database error
    DatabaseError,
    /// Server error
    InternalError,
}

/// Application error structure
pub struct AppError {
    /// HTTP status code
    pub code: StatusCode,
    /// Error message
    pub message: String,
    /// Error type
    pub error_type: AppErrorType,
    /// Additional error details
    pub details: Option<Vec<ErrorDetail>>,
}

/// Detailed error information
pub struct ErrorDetail {
    /// Field name (for validation errors)
    pub field: String,
    /// Error description
    pub error: String,
}
```
Error Conversion
```rust
// Example of converting validation errors
impl From<ValidationError> for AppError {
    fn from(error: ValidationError) -> Self {
        let details = error
            .field_errors()
            .iter()
            .map(|(field, errors)| ErrorDetail {
                field: field.to_string(),
                error: errors[0].message.clone().unwrap_or_default(),
            })
            .collect();

        AppError {
            code: StatusCode::BAD_REQUEST,
            message: "Validation failed".to_string(),
            error_type: AppErrorType::ValidationError,
            details: Some(details),
        }
    }
}
```
Testing
The API includes comprehensive tests:
- Unit Tests: For core database abstractions and PetService business logic
- API Endpoint Tests: Testing the HTTP layer and response handling
Example Unit Test
```rust
#[tokio::test]
async fn test_pet_service_create() {
    // Setup mock repository (must be mutable to register expectations)
    let mut mock_repo = MockPetRepository::new();
    mock_repo
        .expect_create()
        .with(predicate::function(|req: &CreatePetRequest| {
            req.name == "Fluffy" && req.species == "cat" && req.age == 3
        }))
        .returning(|_| {
            Ok(Pet {
                id: "test-uuid".to_string(),
                name: "Fluffy".to_string(),
                species: "cat".to_string(),
                age: 3,
                created_at: Utc::now(),
                updated_at: Utc::now(),
            })
        });

    let service = PetService::new(Arc::new(mock_repo));

    // Call service method
    let create_req = CreatePetRequest {
        name: "Fluffy".to_string(),
        species: "cat".to_string(),
        age: 3,
    };
    let result = service.create_pet(create_req).await;

    // Verify result
    assert!(result.is_ok());
    let pet = result.unwrap();
    assert_eq!(pet.name, "Fluffy");
    assert_eq!(pet.species, "cat");
    assert_eq!(pet.age, 3);
}
```
Example API Test
```rust
#[tokio::test]
async fn test_create_pet_endpoint() {
    // Setup test app with mocked dependencies
    let app = test_app().await;

    // Test valid request
    let response = app
        .client
        .post("/petdb")
        .header("Authorization", "Bearer test-token")
        .json(&json!({
            "name": "Fluffy",
            "species": "cat",
            "age": 3
        }))
        .send()
        .await;

    assert_eq!(response.status(), StatusCode::CREATED);
    let pet: Pet = response.json().await.unwrap();
    assert_eq!(pet.name, "Fluffy");

    // Test invalid request
    let response = app
        .client
        .post("/petdb")
        .header("Authorization", "Bearer test-token")
        .json(&json!({
            "name": "",  // Empty name (invalid)
            "species": "cat",
            "age": 3
        }))
        .send()
        .await;

    assert_eq!(response.status(), StatusCode::BAD_REQUEST);
}
```
Related Resources
- Repository Pattern Guide
- Database API Reference
- API Resource Implementation
- Database Service Example
title: "Architecture Documentation" description: "Documentation about Architecture Documentation" category: architecture tags:
- architecture
- documentation last_updated: March 27, 2025 version: 1.0
Architecture Documentation
This directory contains documentation about the architecture of the Navius application.
Contents
- Spring Boot Migration - Information about migrating from Spring Boot
Purpose
These documents describe the high-level architecture of the Navius application, including design decisions, architectural patterns, and system components. They provide insight into why the system is structured the way it is and how the different parts interact.
title: Navius Architectural Principles description: Core architectural principles and patterns guiding the design of the Navius framework category: reference tags:
- architecture
- design
- principles
- patterns related:
- directory-organization.md
- ../../guides/development/project-navigation.md
- ../standards/naming-conventions.md last_updated: March 27, 2025 version: 1.0
Navius Architectural Principles
Overview
This reference document outlines the core architectural principles and design patterns that guide the development of the Navius framework. These principles ensure the framework remains maintainable, extensible, and performant as it evolves.
Core Principles
1. Modular Design
Navius is built around a modular architecture that separates concerns and enables components to evolve independently.
Key aspects:
- Self-contained modules with clearly defined responsibilities
- Minimal dependencies between modules
- Ability to replace or upgrade individual components without affecting others
- Configuration-driven composition of modules
2. Explicit Over Implicit
Navius favors explicit, clear code over "magic" behavior or hidden conventions.
Key aspects:
- Explicit type declarations and function signatures
- Clear error handling paths
- Minimal use of macros except for well-defined, documented purposes
- No "convention over configuration" that hides important behavior
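As a small illustration, a fallible operation can surface failure in its signature instead of panicking or silently defaulting (a generic sketch, not Navius API; all names here are hypothetical):

```rust
// Sketch: the error path is explicit in the signature, so every caller
// must acknowledge failure rather than rely on hidden behavior.
fn parse_port(input: &str) -> Result<u16, String> {
    input
        .trim()
        .parse::<u16>()
        .map_err(|e| format!("invalid port {input:?}: {e}"))
}

// Any fallback is visible at the call site, not buried in the framework.
fn port_or_default(input: &str) -> u16 {
    parse_port(input).unwrap_or(8080)
}
```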
3. Compile-Time Safety
Navius leverages Rust's type system to catch errors at compile time rather than runtime.
Key aspects:
- Strong typing for all API interfaces
- Use of enums for representing states and variants
- Avoiding dynamic typing except when necessary for interoperability
- Proper error type design for comprehensive error handling
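For example, modeling states as an enum makes invalid states unrepresentable and forces every variant to be handled at compile time (a generic sketch; the type is hypothetical):

```rust
// Sketch: each state carries exactly the data that is valid for it.
enum JobState {
    Queued,
    Running { worker_id: u32 },
    Failed { reason: String },
}

fn describe(state: &JobState) -> String {
    // The compiler rejects this match if a variant is ever missed,
    // e.g. after a new state is added to the enum.
    match state {
        JobState::Queued => "queued".to_string(),
        JobState::Running { worker_id } => format!("running on worker {worker_id}"),
        JobState::Failed { reason } => format!("failed: {reason}"),
    }
}
```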
4. Performance First
Performance is a primary design goal, not an afterthought.
Key aspects:
- Minimal runtime overhead
- Efficient memory usage
- Asynchronous by default
- Careful consideration of allocations and copying
- Benchmarking as part of the development process
5. Developer Experience
The framework prioritizes developer experience and productivity.
Key aspects:
- Intuitive API design
- Comprehensive documentation
- Helpful error messages
- Testing utilities and patterns
- Minimal boilerplate code
Architectural Patterns
Clean Architecture
Navius follows a modified Clean Architecture pattern with distinct layers:
```text
┌─────────────────┐
│   Controllers   │
│  (HTTP Layer)   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    Services     │
│ (Business Logic)│
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Repositories   │
│  (Data Access)  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   Data Store    │
│ (DB, Cache, etc)│
└─────────────────┘
```
Principles applied:
- Dependencies point inward
- Inner layers know nothing about outer layers
- Domain models are independent of persistence models
- Business logic is isolated from I/O concerns
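The last two points can be sketched as an explicit mapping from a persistence row type into a separate domain type, so inner layers never see storage details (a generic sketch; all names are hypothetical):

```rust
// Persistence model: owned by the repository layer (e.g. a database row).
struct UserRow {
    id: String,
    email: String,
}

// Domain model: what services and business logic work with.
#[derive(Debug)]
struct User {
    id: String,
    email: String,
}

// The repository performs the mapping at the boundary.
impl From<UserRow> for User {
    fn from(row: UserRow) -> Self {
        User { id: row.id, email: row.email }
    }
}
```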
Dependency Injection
Navius uses a trait-based dependency injection pattern to enable testability and flexibility:
```rust
// Define a service that depends on a repository trait
pub struct UserService<R: UserRepository> {
    repository: R,
}

impl<R: UserRepository> UserService<R> {
    pub fn new(repository: R) -> Self {
        Self { repository }
    }

    pub async fn get_user(&self, id: Uuid) -> Result<User, Error> {
        self.repository.find_by_id(id).await
    }
}

// In production code
let db_repository = PostgresUserRepository::new(db_pool);
let service = UserService::new(db_repository);

// In test code
let mock_repository = MockUserRepository::new();
let service = UserService::new(mock_repository);
```
Error Handling
Navius uses a centralized error handling approach:
```rust
// Core error type
pub enum AppError {
    NotFound,
    Unauthorized,
    BadRequest(String),
    Validation(Vec<ValidationError>),
    Internal(anyhow::Error),
}

// Converting from domain errors
impl From<DatabaseError> for AppError {
    fn from(error: DatabaseError) -> Self {
        match error {
            DatabaseError::NotFound => AppError::NotFound,
            DatabaseError::ConnectionFailed(e) => AppError::Internal(e.into()),
            // Other conversions...
        }
    }
}

// Converting to HTTP responses
impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, error_message) = match self {
            AppError::NotFound => (StatusCode::NOT_FOUND, "Resource not found"),
            AppError::Unauthorized => (StatusCode::UNAUTHORIZED, "Unauthorized"),
            // Other mappings...
        };

        // Create response
        (status, Json(ErrorResponse { message: error_message.to_string() })).into_response()
    }
}
```
Middleware Pipeline
Navius uses a middleware-based pipeline for processing HTTP requests:
```rust
let app = Router::new()
    .route("/api/users", get(list_users).post(create_user))
    .layer(TracingLayer::new_for_http())
    .layer(CorsLayer::permissive())
    .layer(AuthenticationLayer::new())
    .layer(CompressionLayer::new())
    .layer(TimeoutLayer::new(Duration::from_secs(30)));
```
Configuration Management
Navius uses a layered configuration system:
- Default values
- Configuration files
- Environment variables
- Command-line arguments
This ensures flexibility while maintaining sensible defaults:
```rust
// Configuration loading order
let config = ConfigBuilder::new()
    .add_defaults()
    .add_file("config/default.toml")
    .add_file(format!("config/{}.toml", environment))
    .add_environment_variables()
    .add_command_line_args()
    .build()?;
```
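The precedence the builder expresses, with later sources overriding earlier ones, can be sketched as a reverse-order lookup over layers (a toy stand-in, not the actual builder internals; keys are hypothetical):

```rust
use std::collections::HashMap;

// Layers are ordered lowest priority first (defaults, files, env, CLI);
// lookup scans from the last, highest-priority layer backwards.
fn resolve(layers: &[HashMap<String, String>], key: &str) -> Option<String> {
    layers.iter().rev().find_map(|layer| layer.get(key).cloned())
}
```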
API Design Principles
Resource-Oriented
APIs are structured around resources and their representations.
Consistent Error Handling
A standardized error response format is used across all API endpoints.
Proper HTTP Method Usage
HTTP methods match their semantic meaning (GET, POST, PUT, DELETE, etc.).
Versioning Support
APIs support versioning to maintain backward compatibility.
Database Access Principles
Repository Pattern
Data access is encapsulated behind repository interfaces.
Transaction Management
Explicit transaction boundaries with proper error handling.
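A minimal sketch of an explicit transaction boundary, using an in-memory stand-in for a database handle (real code would use the driver's `begin`/`commit` API; all names here are hypothetical):

```rust
// Stand-in for a database transaction handle.
struct Tx {
    committed: bool,
}

impl Tx {
    fn commit(&mut self) {
        self.committed = true;
    }
}

// The boundary is explicit: commit happens only if the closure succeeds;
// an error propagates via `?` and the uncommitted Tx is dropped (rollback).
fn with_transaction<T, E>(f: impl FnOnce(&mut Tx) -> Result<T, E>) -> Result<T, E> {
    let mut tx = Tx { committed: false };
    let out = f(&mut tx)?;
    tx.commit();
    debug_assert!(tx.committed);
    Ok(out)
}
```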
Migration-Based Schema Evolution
Database schemas evolve through explicit migrations.
Testing Principles
Test Pyramid
Balance between unit, integration, and end-to-end tests.
Test Isolation
Tests should not depend on each other or external state.
Mocking External Dependencies
External dependencies are mocked for deterministic testing.
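For instance, putting a nondeterministic dependency behind a trait makes substituting a fixed test double trivial (a generic sketch, not the Navius mocking utilities):

```rust
// The production implementation would read the system clock; tests use
// a fixed value so assertions are deterministic.
trait Clock {
    fn now_ms(&self) -> u64;
}

struct FixedClock(u64);

impl Clock for FixedClock {
    fn now_ms(&self) -> u64 {
        self.0
    }
}

fn is_expired(clock: &dyn Clock, deadline_ms: u64) -> bool {
    clock.now_ms() > deadline_ms
}
```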
Related Documents
- Directory Organization - Detailed directory structure
- Project Navigation - Navigating the project
- Naming Conventions - Naming conventions reference
title: "Navius Project Structure" description: "Documentation about Navius Project Structure" category: architecture tags:
- api
- architecture
- authentication
- aws
- caching
- database
- development
- documentation
- integration
- redis
- testing last_updated: March 27, 2025 version: 1.0
Navius Project Structure
Updated At: March 23, 2025
This document provides a comprehensive guide to the Navius project structure, helping developers understand how the codebase is organized and how different components work together.
Directory Structure Overview
```text
navius/
├── .devtools/               # Development tools and configurations
│   ├── coverage/            # Test coverage tools and reports
│   ├── github/              # GitHub-specific configurations
│   ├── gitlab/              # GitLab-specific configurations (excluding CI)
│   ├── ide/                 # IDE configurations (VS Code, IntelliJ, etc.)
│   └── scripts/             # Development and build scripts
├── config/                  # Application configuration files
│   ├── default.yaml         # Default configuration
│   ├── development.yaml     # Development environment configuration
│   ├── production.yaml      # Production environment configuration
│   ├── api_registry.json    # API generation registry
│   └── swagger/             # API definitions in OpenAPI format
├── docs/                    # Project documentation
│   ├── architecture/        # Architecture documentation
│   ├── contributing/        # Contribution guidelines
│   ├── guides/              # User and developer guides
│   ├── reference/           # API and technical reference
│   └── roadmaps/            # Development roadmaps
├── migrations/              # Database migration files
├── src/                     # Source code
│   ├── app/                 # User-extensible application code
│   │   ├── api/             # User-defined API endpoints
│   │   ├── services/        # User-defined services
│   │   └── router.rs        # User-defined routes
│   ├── cache/               # Cache implementations (wrappers)
│   ├── config/              # Configuration implementations (wrappers)
│   ├── core/                # Core business logic and implementations
│   │   ├── api/             # API implementations
│   │   ├── auth/            # Authentication functionality
│   │   ├── cache/           # Cache management
│   │   ├── config/          # Configuration management
│   │   ├── database/        # Database access
│   │   ├── error/           # Error handling
│   │   ├── metrics/         # Metrics collection
│   │   ├── reliability/     # Circuit breakers, timeouts, retries
│   │   ├── repository/      # Data repositories
│   │   ├── router/          # Routing definitions
│   │   ├── services/        # Business services
│   │   └── utils/           # Utility functions
│   └── generated_apis.rs    # Bridge to generated API code
├── target/                  # Build artifacts
│   └── generated/           # Generated API code
├── tests/                   # Test suite
│   ├── integration/         # Integration tests
│   └── common/              # Common test utilities
├── .env                     # Environment variables (for development)
├── .gitlab-ci.yml           # GitLab CI/CD configuration
├── build.rs                 # Build script
├── Cargo.toml               # Rust dependencies and project configuration
└── README.md                # Project overview
```
Version Control Strategy
Navius uses a dual VCS approach:
- GitLab (Primary): Business operations, CI/CD, issue tracking, code review workflow
- GitHub (Secondary): Public visibility, community engagement, documentation accessibility
Repository Sync Strategy
Repositories are synchronized using GitLab's mirroring feature:
- Automatic one-way sync from GitLab → GitHub after successful builds
- Production code and releases are pushed to GitHub only after validation
Core Components and Their Responsibilities
1. Core Module Structure (`src/core/`)
The core module contains the central business logic and implementations:
- api: Contains the API endpoints and handlers
- auth: Authentication and authorization functionality
- cache: Cache management and provider integration
- config: Configuration management and parsing
- database: Database connections and query execution
- error: Error types and handling utilities
- metrics: Metrics collection and reporting
- reliability: Circuit breakers, rate limiting, and retry logic
- repository: Data access layer
- router: API route definitions
- services: Business service implementations
- utils: Shared utility functions
2. User-Facing Components (`src/app/`)
User-extensible scaffolding that allows developers to extend the application:
- router.rs: User-defined routes and endpoints
- api/: User-defined API endpoints
- services/: User-defined service implementations
3. Generated Code (`target/generated/`)
Auto-generated API clients and models:
- [api_name]_api/: Generated API client code for each API
- openapi/: OpenAPI schemas and configurations
Module Organization and Dependencies
Navius follows a modular architecture with clean separation of concerns:
- HTTP Layer (API): Defines REST endpoints, handles HTTP requests/responses
- Business Logic (Services): Implements core application functionality
- Data Access (Repository): Manages data persistence and retrieval
- Domain Model (Models): Defines data structures used across the application
- Infrastructure (Core): Provides framework capabilities like auth, caching, etc.
Dependencies Between Modules
The dependencies between modules follow a clean architecture approach:
```text
API → Services → Repository → Database
          ↑
        Models
          ↑
         Core
```
Specific component dependencies:
- `api` depends on `services`, `repository`, `error`
- `services` depends on `repository`, `error`
- `repository` depends on `database`, `error`
- `router` depends on `api`, `auth`
- `auth` depends on `error`, `config`
Major Dependencies and Integrations
- Axum: Web framework
- Tokio: Asynchronous runtime
- SQLx: Database access
- Redis: Cache provider
- AWS: Cloud services integration
- Microsoft Entra: Authentication platform
Key Design Patterns
- Clean Architecture: Separation of concerns with core business logic isolated
- Repository Pattern: Data access abstraction
- Dependency Injection: Through function parameters and context
- Circuit Breaker Pattern: For resilient external service calls
- Middleware Pattern: For cross-cutting concerns
Route Groups
- `/` - Public routes, no authentication required
- `/read` - Read-only authenticated routes
- `/full` - Full access authenticated routes
- `/actuator` - System monitoring and health checks
Common Development Workflows
1. Adding a New API Endpoint
- Define the route in `src/app/router.rs`
- Implement the handler in `src/app/api/`
- Add any needed services in `src/app/services/`
- Add tests in `tests/integration/`
2. Working with Generated API Clients
- Update API definitions in `config/swagger/`
- Run `.devtools/scripts/regenerate_api.sh` or `cargo build` (automatic generation)
- Import the generated code through `src/generated_apis.rs`
3. Updating Configuration
- Modify the appropriate YAML file in `config/`
- Access the configuration through the `config::get_config()` function
Testing Structure
Each component type has its own testing approach:
- Services: Unit tests focus on business logic
- Repositories: Integration tests focus on data access
- API Endpoints: End-to-end tests focus on HTTP interactions
- Models: Property-based tests focus on invariants
Navigation Tools
To help with navigating the codebase, several tools are available:
- Documentation:
  - See `docs/guides/project-navigation.md` for detailed navigation guidance
  - Check `docs/architecture/module-dependencies.md` for visualizations of module dependencies
- Helper Scripts:
  - Use `.devtools/scripts/navigate.sh` to find code components
  - Use `.devtools/scripts/verify-structure.sh` to validate project structure
- IDE Configuration:
  - VS Code configuration is available in `.devtools/ide/vscode/`
  - Use the provided launch configurations for debugging
Further Resources
- Developer Onboarding: See `docs/contributing/onboarding.md`
- IDE Setup: See `docs/contributing/ide-setup.md`
- Module Dependencies: See `docs/architecture/module-dependencies.md` for visualizations
Related Documents
- Module Dependencies - Dependencies between modules
Component Architecture
Extension Points
This document outlines the extension points available in the Navius framework, which allow for customization and extension of the framework's behavior without modifying its core.
What Are Extension Points?
Extension points are well-defined interfaces in the Navius framework that allow applications to:
- Extend the framework with custom functionality
- Customize existing behavior
- Replace default implementations with application-specific ones
- Integrate with third-party libraries and systems
Extension points are critical for maintaining a clean separation between framework code and application-specific code.
Types of Extension Points
Navius provides several types of extension points:
- Trait-based Extensions: Implementing traits to extend functionality
- Service Registration: Registering custom services
- Provider Registration: Registering custom providers
- Middleware Extensions: Adding custom middleware
- Event Handlers: Subscribing to framework events
- Configuration Extensions: Extending configuration
Trait-based Extensions
The most common extension mechanism in Navius is implementing traits:
```rust
// Create a custom health check by implementing the HealthCheck trait
pub struct DatabaseHealthCheck {
    db_pool: PgPool,
}

impl DatabaseHealthCheck {
    pub fn new(db_pool: PgPool) -> Self {
        Self { db_pool }
    }
}

impl HealthCheck for DatabaseHealthCheck {
    fn name(&self) -> &'static str {
        "database"
    }

    async fn check(&self) -> HealthStatus {
        match self.db_pool.acquire().await {
            Ok(_) => HealthStatus::up(),
            Err(e) => HealthStatus::down().with_details("Failed to connect to database", e),
        }
    }
}

// Register the custom health check
app.register_health_check(Box::new(DatabaseHealthCheck::new(db_pool)));
```
Common trait-based extension points include:
- `HealthCheck`: Custom health checks
- `AuthenticationProvider`: Custom authentication mechanisms
- `LoggingAdapter`: Custom logging integrations
- `CacheService`: Custom cache implementations
- `EventHandler`: Custom event processing
Service Registration
Custom services can be registered with the service registry:
```rust
// Define a custom service
pub struct EmailService {
    config: EmailConfig,
    client: reqwest::Client,
}

impl EmailService {
    pub fn new(config: EmailConfig) -> Self {
        Self {
            config,
            client: reqwest::Client::new(),
        }
    }

    pub async fn send_email(&self, to: &str, subject: &str, body: &str) -> Result<(), EmailError> {
        // Implementation...
        Ok(())
    }
}

// Register the service
let mut registry = ServiceRegistry::new();
let email_service = EmailService::new(config.email.clone());
registry.register::<EmailService>(email_service);

// Use the service later
let email_service = registry
    .get::<EmailService>()
    .expect("Email service not registered");
email_service.send_email("[email protected]", "Hello", "World").await?;
```
Provider Registration
Custom providers can be registered to create services:
```rust
// Define a custom cache provider
pub struct CloudCacheProvider;

impl CloudCacheProvider {
    pub fn new() -> Self {
        Self
    }
}

impl CacheProvider for CloudCacheProvider {
    fn create(&self, config: &CacheConfig) -> Result<Box<dyn CacheService>, ProviderError> {
        let cloud_config = config.cloud.as_ref().ok_or_else(|| {
            ProviderError::Configuration("Cloud cache configuration missing".into())
        })?;

        let client = CloudCacheClient::new(&cloud_config.connection_string)?;
        let cache_service = CloudCacheService::new(client);
        Ok(Box::new(cache_service))
    }

    fn supports_type(&self, cache_type: &str) -> bool {
        cache_type.eq_ignore_ascii_case("cloud")
    }

    fn name(&self) -> &'static str {
        "cloud-cache"
    }
}

// Register the provider
let mut cache_registry = ProviderRegistry::new();
cache_registry.register(Box::new(CloudCacheProvider::new()));
```
Middleware Extensions
Custom middleware can be added to the HTTP pipeline:
```rust
// Define custom middleware
pub struct RateLimitMiddleware {
    limiter: Arc<RateLimiter>,
}

impl RateLimitMiddleware {
    pub fn new(requests_per_minute: u64) -> Self {
        let limiter = Arc::new(RateLimiter::new(requests_per_minute));
        Self { limiter }
    }
}

impl<S> Layer<S> for RateLimitMiddleware {
    type Service = RateLimitService<S>;

    fn layer(&self, service: S) -> Self::Service {
        RateLimitService {
            inner: service,
            limiter: self.limiter.clone(),
        }
    }
}

// Register the middleware
let app = Router::new()
    .route("/api/users", get(list_users))
    .layer(RateLimitMiddleware::new(60));
```
Event Handlers
Custom event handlers can be registered to respond to framework events:
```rust
// Define a custom event handler
pub struct AuditEventHandler {
    db_pool: PgPool,
}

impl AuditEventHandler {
    pub fn new(db_pool: PgPool) -> Self {
        Self { db_pool }
    }
}

impl EventHandler for AuditEventHandler {
    async fn handle(&self, event: &Event) -> Result<(), EventError> {
        match event {
            Event::UserAuthenticated { user_id, ip_address, timestamp } => {
                sqlx::query!(
                    "INSERT INTO audit_log (event_type, user_id, ip_address, timestamp)
                     VALUES ($1, $2, $3, $4)",
                    "user_authenticated",
                    user_id,
                    ip_address,
                    timestamp
                )
                .execute(&self.db_pool)
                .await?;
            }
            // Handle other events...
            _ => {}
        }
        Ok(())
    }

    fn supports_event(&self, event_type: &str) -> bool {
        matches!(event_type, "user_authenticated" | "user_created" | "user_deleted")
    }
}

// Register the event handler
let mut event_bus = EventBus::new();
event_bus.register_handler(Box::new(AuditEventHandler::new(db_pool)));
```
Configuration Extensions
The configuration system can be extended with custom sections:
```rust
// Define a custom configuration section
#[derive(Debug, Clone, Deserialize)]
pub struct TwilioConfig {
    pub account_sid: String,
    pub auth_token: String,
    pub from_number: String,
}

// Extend the application configuration
#[derive(Debug, Clone, Deserialize)]
pub struct AppConfig {
    // Standard configuration...
    pub server: ServerConfig,
    pub database: DatabaseConfig,
    pub cache: CacheConfig,

    // Custom configuration...
    pub twilio: TwilioConfig,
}

// Use the custom configuration
let config = ConfigBuilder::new()
    .add_file("config/default.toml")
    .build::<AppConfig>()?;

let twilio_service = TwilioService::new(config.twilio);
```
Extension Point Best Practices
Make Extension Points Explicit
Clearly document which parts of the framework are intended for extension. Use traits with well-defined methods rather than relying on inheriting from concrete classes.
Follow the Principle of Least Surprise
Extension points should behave in predictable ways. Avoid hidden behaviors or side effects that might surprise developers using the extension point.
Use Composition Over Inheritance
Favor composition patterns (like middleware) over inheritance hierarchies for extensions. This provides more flexibility and avoids many common inheritance problems.
Provide Sensible Defaults
Every extension point should have a reasonable default implementation. Users should only need to implement custom extensions when they want to change the default behavior.
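In Rust this is naturally expressed with default trait method bodies, so implementors override only what they need (a generic sketch, not a Navius trait; all names are hypothetical):

```rust
trait Greeter {
    fn name(&self) -> &str;

    // Sensible default: most implementors never override this.
    fn greet(&self) -> String {
        format!("hello, {}", self.name())
    }
}

struct Basic;

impl Greeter for Basic {
    fn name(&self) -> &str {
        "basic"
    }
}

struct Shouty;

impl Greeter for Shouty {
    fn name(&self) -> &str {
        "shouty"
    }

    // Override only the behavior that needs to differ.
    fn greet(&self) -> String {
        format!("HELLO, {}", self.name().to_uppercase())
    }
}
```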
Document Extension Requirements
Clearly document what is required to implement an extension point, including:
- Required methods and their semantics
- Threading and lifetime requirements
- Error handling expectations
- Performance considerations
Test Extensions Thoroughly
Provide testing utilities and examples to help users test their extensions. Extension points should be designed with testability in mind.
Core Extension Points Reference
HealthCheck Trait
```rust
pub trait HealthCheck: Send + Sync + 'static {
    fn name(&self) -> &'static str;
    async fn check(&self) -> HealthStatus;
}
```
CacheService Trait
```rust
pub trait CacheService: Send + Sync + 'static {
    async fn get(&self, key: &str) -> Result<Option<String>, CacheError>;
    async fn set(&self, key: &str, value: String, ttl: Duration) -> Result<(), CacheError>;
    async fn delete(&self, key: &str) -> Result<(), CacheError>;
    async fn clear(&self) -> Result<(), CacheError>;
}
```
DatabaseService Trait
```rust
pub trait DatabaseService: Send + Sync + 'static {
    async fn execute(&self, query: &str, params: &[Value]) -> Result<u64, DatabaseError>;
    async fn query_one(&self, query: &str, params: &[Value]) -> Result<Row, DatabaseError>;
    async fn query_all(&self, query: &str, params: &[Value]) -> Result<Vec<Row>, DatabaseError>;

    async fn transaction<F, R>(&self, f: F) -> Result<R, DatabaseError>
    where
        F: FnOnce(&dyn Transaction) -> Future<Output = Result<R, DatabaseError>> + Send,
        R: Send + 'static;
}
```
AuthenticationProvider Trait
```rust
pub trait AuthenticationProvider: Send + Sync + 'static {
    async fn authenticate(&self, credentials: &Credentials) -> Result<Option<User>, AuthError>;
    async fn validate_token(&self, token: &str) -> Result<Option<User>, AuthError>;
    async fn refresh_token(&self, token: &str) -> Result<Option<String>, AuthError>;
}
```
EventHandler Trait
```rust
pub trait EventHandler: Send + Sync + 'static {
    async fn handle(&self, event: &Event) -> Result<(), EventError>;
    fn supports_event(&self, event_type: &str) -> bool;
}
```
Related Documents
Service Architecture
This document outlines the service architecture used throughout the Navius framework, focusing on the design patterns and implementation details that enable flexible, extensible service composition.
Core Concepts
The Navius service architecture is built around several key concepts:
- Service Traits: Defined interfaces that services implement
- Service Implementations: Concrete implementations of service traits
- Service Registry: A central registry for accessing services
- Service Dependencies: Explicit declaration of service dependencies
- Service Lifecycle: Management of service initialization and cleanup
Service Organization
```text
src/
├── core/
│   └── services/
│       ├── traits/           # Service trait definitions
│       ├── implementations/  # Default implementations
│       └── registry.rs       # Service registry
└── app/
    └── services/
        └── implementations/  # Application-specific implementations
```
Service Traits
Service traits define the interface that service implementations must provide. They are typically defined in the `core/services/traits` directory:
```rust
// In src/core/services/traits/cache.rs
pub trait CacheService: Send + Sync + 'static {
    async fn get(&self, key: &str) -> Result<Option<String>, CacheError>;
    async fn set(&self, key: &str, value: String, ttl: Duration) -> Result<(), CacheError>;
    async fn delete(&self, key: &str) -> Result<(), CacheError>;
    async fn clear(&self) -> Result<(), CacheError>;
}
```
Key aspects of service traits:
- They should be as minimal as possible, focusing on core functionality
- They should include appropriate bounds (Send, Sync, 'static) for async usage
- They should return Result types with specific error types
- They should be well-documented with examples
Service Implementations
Service implementations provide concrete implementations of service traits. They are typically defined in the `core/services/implementations` directory for core implementations and `app/services/implementations` for application-specific implementations:
```rust
// In src/core/services/implementations/memory_cache.rs
pub struct MemoryCacheService {
    cache: RwLock<HashMap<String, CacheEntry>>,
}

impl MemoryCacheService {
    pub fn new() -> Self {
        Self {
            cache: RwLock::new(HashMap::new()),
        }
    }
}

impl CacheService for MemoryCacheService {
    async fn get(&self, key: &str) -> Result<Option<String>, CacheError> {
        let cache = self.cache.read().await;
        let entry = cache.get(key);

        match entry {
            Some(entry) if !entry.is_expired() => Ok(Some(entry.value.clone())),
            _ => Ok(None),
        }
    }

    // Other method implementations...
}
```
Key aspects of service implementations:
- They should implement the service trait fully
- They should be configurable through constructor parameters
- They should be well-tested with unit tests
- They should properly handle error conditions
- They may implement multiple service traits if appropriate
Service Registry
The service registry is a central component for accessing services. It is responsible for:
- Storing service instances
- Providing type-safe access to services
- Managing service dependencies
- Ensuring services are initialized in the correct order
```rust
// In src/core/services/registry.rs
pub struct ServiceRegistry {
    services: HashMap<TypeId, Box<dyn Any + Send + Sync>>,
}

impl ServiceRegistry {
    pub fn new() -> Self {
        Self {
            services: HashMap::new(),
        }
    }

    pub fn register<S: Any + Send + Sync>(&mut self, service: S) {
        let type_id = TypeId::of::<S>();
        self.services.insert(type_id, Box::new(service));
    }

    pub fn get<S: Any + Send + Sync>(&self) -> Option<&S> {
        let type_id = TypeId::of::<S>();
        self.services.get(&type_id).and_then(|boxed| boxed.downcast_ref::<S>())
    }
}
```
Service Dependencies
Services often depend on other services. These dependencies should be explicitly declared and injected through constructors:
```rust
// In src/core/services/implementations/tiered_cache.rs
pub struct TieredCacheService<P: CacheService, S: CacheService> {
    primary: P,
    secondary: S,
}

impl<P: CacheService, S: CacheService> TieredCacheService<P, S> {
    pub fn new(primary: P, secondary: S) -> Self {
        Self { primary, secondary }
    }
}

impl<P: CacheService, S: CacheService> CacheService for TieredCacheService<P, S> {
    async fn get(&self, key: &str) -> Result<Option<String>, CacheError> {
        // Try primary cache first
        match self.primary.get(key).await? {
            Some(value) => Ok(Some(value)),
            None => {
                // Try secondary cache
                match self.secondary.get(key).await? {
                    Some(value) => {
                        // Populate primary cache
                        let _ = self.primary.set(key, value.clone(), Duration::from_secs(3600)).await;
                        Ok(Some(value))
                    },
                    None => Ok(None),
                }
            }
        }
    }

    // Other method implementations...
}
```
Key aspects of service dependencies:
- Dependencies should be injected through constructors
- Generic type parameters should be used for flexibility
- Services should depend on traits, not concrete implementations
- Dependencies should be well-documented
Service Initialization
Services are typically initialized during application startup through the service registry:
```rust
// In src/app/startup.rs
pub fn initialize_services(config: &AppConfig) -> ServiceRegistry {
    let mut registry = ServiceRegistry::new();

    // Create and register database service
    let db_service = PostgresDatabaseService::new(&config.database);
    registry.register::<dyn DatabaseService>(Box::new(db_service));

    // Create and register cache service
    let cache_service = RedisCacheService::new(&config.cache);
    registry.register::<dyn CacheService>(Box::new(cache_service));

    // Create and register user service, which depends on database service
    let db_service = registry.get::<dyn DatabaseService>().unwrap();
    let user_service = UserService::new(db_service);
    registry.register::<dyn UserService>(Box::new(user_service));

    registry
}
```
Service Discovery
Services can be discovered and accessed through the service registry:
```rust
// In a request handler
pub async fn handle_request(
    Path(user_id): Path<String>,
    State(registry): State<Arc<ServiceRegistry>>,
) -> impl IntoResponse {
    // Get user service from registry
    let user_service = match registry.get::<dyn UserService>() {
        Some(service) => service,
        None => return (StatusCode::INTERNAL_SERVER_ERROR, "User service not found").into_response(),
    };

    // Use the service
    match user_service.get_user(&user_id).await {
        Ok(user) => (StatusCode::OK, Json(user)).into_response(),
        Err(_) => (StatusCode::NOT_FOUND, "User not found").into_response(),
    }
}
```
Best Practices
Keep Services Focused
Each service should have a single, well-defined responsibility. Services that try to do too much become difficult to test and maintain.
Use Dependency Injection
Services should receive their dependencies through constructors, not create them internally. This enables easier testing and flexibility.
Test Services in Isolation
Each service should be testable in isolation, without requiring its dependencies to be fully implemented. Use mocks or stubs for dependencies in tests.
Document Service Contracts
Service traits should be well-documented, including example usage, error conditions, and performance characteristics.
Consider Service Lifecycle
Services may need initialization (connecting to databases, loading caches) and cleanup (closing connections, flushing data). Ensure these are properly handled.
Error Handling
Services should use specific error types that provide meaningful information about what went wrong. Generic error types make debugging difficult.
Related Documents
Configuration
title: Navius Environment Variables Reference description: Comprehensive reference of environment variables for configuring Navius applications category: reference tags:
- configuration
- environment-variables
- settings related:
- ../architecture/principles.md
- ../../guides/deployment/production-deployment.md
- ../../guides/deployment/cloud-deployment.md last_updated: March 27, 2025 version: 1.0
Navius Environment Variables Reference
Overview
This reference document provides a comprehensive list of all environment variables supported by the Navius framework. Environment variables are used to configure various aspects of the application without changing code or configuration files, making them ideal for deployment across different environments.
Core Settings
Variable | Description | Default | Example |
---|---|---|---|
RUN_ENV | Application environment | development | production |
PORT | HTTP server port | 3000 | 8080 |
HOST | HTTP server host | 127.0.0.1 | 0.0.0.0 |
LOG_LEVEL | Logging verbosity level | info | debug |
LOG_FORMAT | Log output format | text | json |
CONFIG_PATH | Path to configuration directory | ./config | /etc/navius/config |
RUST_BACKTRACE | Enable backtrace on errors | 0 | 1 |
RUST_LOG | Detailed logging configuration | - | navius=debug,warn |
Database Configuration
Variable | Description | Default | Example |
---|---|---|---|
DATABASE_URL | Database connection string | - | postgres://user:pass@localhost/dbname |
DATABASE_POOL_SIZE | Max database connections | 5 | 20 |
DATABASE_TIMEOUT_SECONDS | Query timeout in seconds | 30 | 10 |
DATABASE_CONNECT_TIMEOUT_SECONDS | Connection timeout in seconds | 5 | 3 |
DATABASE_IDLE_TIMEOUT_SECONDS | Idle connection timeout | 300 | 600 |
DATABASE_MAX_LIFETIME_SECONDS | Max connection lifetime | 1800 | 3600 |
DATABASE_SSL_MODE | SSL connection mode | prefer | require |
RUN_MIGRATIONS | Auto-run migrations on startup | false | true |
Cache Configuration
Variable | Description | Default | Example |
---|---|---|---|
REDIS_URL | Redis connection URL | - | redis://localhost:6379 |
REDIS_POOL_SIZE | Max Redis connections | 5 | 10 |
REDIS_TIMEOUT_SECONDS | Redis command timeout | 5 | 3 |
CACHE_TTL_SECONDS | Default cache TTL | 3600 | 300 |
CACHE_PREFIX | Cache key prefix | navius: | myapp:prod: |
CACHE_ENABLED | Enable caching | true | false |
Security Settings
Variable | Description | Default | Example |
---|---|---|---|
JWT_SECRET | Secret for JWT tokens | - | your-jwt-secret-key |
JWT_EXPIRATION_SECONDS | JWT token expiration | 86400 | 3600 |
CORS_ALLOWED_ORIGINS | Allowed CORS origins | * | https://example.com,https://app.example.com |
CORS_ALLOWED_METHODS | Allowed CORS methods | GET,POST,PUT,DELETE | GET,POST |
CORS_ALLOWED_HEADERS | Allowed CORS headers | Content-Type,Authorization | X-API-Key,Authorization |
CORS_MAX_AGE_SECONDS | CORS preflight cache time | 86400 | 3600 |
API_KEY | Global API key for auth | - | your-api-key |
TLS_CERT_PATH | Path to TLS certificate | - | /etc/certs/server.crt |
TLS_KEY_PATH | Path to TLS private key | - | /etc/certs/server.key |
ENABLE_TLS | Enable TLS encryption | false | true |
HTTP Server Settings
Variable | Description | Default | Example |
---|---|---|---|
REQUEST_TIMEOUT_SECONDS | HTTP request timeout | 30 | 60 |
REQUEST_BODY_LIMIT | Max request body size | 1MB | 10MB |
ENABLE_COMPRESSION | Enable response compression | true | false |
KEEP_ALIVE_SECONDS | Keep-alive connection timeout | 75 | 120 |
MAX_CONNECTIONS | Max concurrent connections | 1024 | 10000 |
WORKERS | Number of worker threads | (cores * 2) | 8 |
ENABLE_HEALTH_CHECK | Enable /health endpoint | true | false |
GRACEFUL_SHUTDOWN_SECONDS | Graceful shutdown period | 30 | 10 |
API Settings
Variable | Description | Default | Example |
---|---|---|---|
API_VERSION | Default API version | v1 | v2 |
ENABLE_DOCS | Enable API documentation | true | false |
DOCS_URL_PATH | Path to API docs | /docs | /api/docs |
RATE_LIMIT_ENABLED | Enable rate limiting | false | true |
RATE_LIMIT_REQUESTS | Max requests per window | 100 | 1000 |
RATE_LIMIT_WINDOW_SECONDS | Rate limit time window | 60 | 3600 |
API_BASE_PATH | Base path for all APIs | /api | /api/v1 |
Monitoring and Telemetry
Variable | Description | Default | Example |
---|---|---|---|
ENABLE_METRICS | Enable Prometheus metrics | true | false |
METRICS_PATH | Metrics endpoint path | /metrics | /actuator/metrics |
TRACING_ENABLED | Enable OpenTelemetry tracing | false | true |
JAEGER_ENDPOINT | Jaeger collector endpoint | - | http://jaeger:14268/api/traces |
OTLP_ENDPOINT | OTLP collector endpoint | - | http://collector:4317 |
SERVICE_NAME | Service name for telemetry | navius | user-service |
LOG_REQUEST_HEADERS | Log HTTP request headers | false | true |
HEALTH_CHECK_PATH | Health check endpoint path | /health | /actuator/health |
Integration Settings
Variable | Description | Default | Example |
---|---|---|---|
EMAIL_SMTP_HOST | SMTP server host | - | smtp.example.com |
EMAIL_SMTP_PORT | SMTP server port | 25 | 587 |
EMAIL_SMTP_USERNAME | SMTP authentication user | - | [email protected] |
EMAIL_SMTP_PASSWORD | SMTP authentication password | - | password |
EMAIL_DEFAULT_FROM | Default sender address | - | [email protected] |
AWS_ACCESS_KEY_ID | AWS access key | - | AKIAIOSFODNN7EXAMPLE |
AWS_SECRET_ACCESS_KEY | AWS secret key | - | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
AWS_REGION | AWS region | us-east-1 | eu-west-1 |
S3_BUCKET | S3 bucket name | - | my-app-uploads |
S3_URL_EXPIRATION_SECONDS | S3 presigned URL expiration | 3600 | 300 |
Resource Limits
Variable | Description | Default | Example |
---|---|---|---|
MEMORY_LIMIT | Memory limit in MB | - | 512 |
CPU_LIMIT | CPU limit (percentage) | - | 80 |
TOKIO_WORKER_THREADS | Tokio runtime worker threads | (cores) | 8 |
BLOCKING_THREADS | Tokio blocking thread pool size | (cores * 4) | 32 |
MAX_TASK_BACKLOG | Max queued tasks | 10000 | 5000 |
Feature Flags
Variable | Description | Default | Example |
---|---|---|---|
FEATURE_ADVANCED_SEARCH | Enable advanced search | false | true |
FEATURE_FILE_UPLOADS | Enable file uploads | true | false |
FEATURE_WEBSOCKETS | Enable WebSocket support | false | true |
FEATURE_BATCH_PROCESSING | Enable batch processing | false | true |
FEATURE_NOTIFICATIONS | Enable notifications | true | false |
Using Environment Variables
Environment variables can be set in various ways:
1. In development (`.env` file):
# .env
DATABASE_URL=postgres://localhost/navius_dev
LOG_LEVEL=debug
FEATURE_ADVANCED_SEARCH=true
2. In shell:
export DATABASE_URL=postgres://localhost/navius_dev
export LOG_LEVEL=debug
./run_dev.sh
3. In Docker:
docker run -e DATABASE_URL=postgres://db/navius -e LOG_LEVEL=info navius
4. In Kubernetes:
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-credentials
key: url
- name: LOG_LEVEL
value: "info"
Precedence
Environment variables are loaded in this order (later sources override earlier ones):
1. Default values
2. Configuration files (`config/{environment}.toml`)
3. `.env` file
4. Environment variables
5. Command line arguments
Related Documents
- Architectural Principles - Core architectural principles
- Production Deployment Guide - Deploying to production
- Cloud Deployment Guide - Cloud-specific deployment
Application Config
title: "Cache Configuration Reference" description: "Detailed reference for all configuration options available in the Navius caching system" category: reference tags:
- reference
- configuration
- caching
- redis
- settings related:
- ../../guides/caching-strategies.md
- ../../guides/features/caching.md
- ../patterns/caching-patterns.md last_updated: March 27, 2025 version: 1.0
Cache Configuration Reference
This document provides detailed information about all configuration options for the Navius caching system.
Overview
Navius provides a flexible and configurable caching system that can be customized to suit different environments and use cases. The cache configuration is defined in the application configuration files.
Configuration Structure
The cache configuration section in your application configuration looks like this:
cache:
# Global cache configuration
enabled: true
default_provider: "memory"
default_ttl_seconds: 300 # 5 minutes
# Provider-specific configurations
providers:
- name: "memory"
enabled: true
type: "moka"
capacity: 10000 # items
eviction_policy: "LRU"
ttl_seconds: 300 # 5 minutes
- name: "redis"
enabled: true
type: "redis"
connection_string: "redis://localhost:6379"
ttl_seconds: 3600 # 1 hour
connection_pool_size: 10
timeout_ms: 500
# Two-tier cache configuration
two_tier:
enabled: true
fast_cache_provider: "memory"
slow_cache_provider: "redis"
fast_cache_ttl_seconds: 60 # 1 minute
slow_cache_ttl_seconds: 3600 # 1 hour
promotion_enabled: true
Configuration Options
Global Cache Options
Option | Type | Default | Description |
---|---|---|---|
enabled | boolean | true | Enables or disables the entire caching system |
default_provider | string | "memory" | The name of the default cache provider to use |
default_ttl_seconds | integer | 300 | Default time-to-live in seconds for cache entries |
Provider Configuration
Each provider has the following configuration options:
Common Provider Options
Option | Type | Default | Description |
---|---|---|---|
name | string | (required) | Unique identifier for the provider |
enabled | boolean | true | Enables or disables this provider |
type | string | (required) | Type of provider (e.g., "moka", "redis") |
ttl_seconds | integer | Global default | Default TTL for this provider |
Memory Provider Options (type: "moka")
Option | Type | Default | Description |
---|---|---|---|
capacity | integer | 10000 | Maximum number of items in the cache |
eviction_policy | string | "LRU" | Eviction policy, one of: "LRU", "LFU", "FIFO" |
time_to_idle_seconds | integer | None | Time after which an entry is evicted if not accessed |
expire_after_access_seconds | integer | None | Time after which an entry is evicted after last access |
expire_after_write_seconds | integer | None | Time after which an entry is evicted after creation |
Redis Provider Options (type: "redis")
Option | Type | Default | Description |
---|---|---|---|
connection_string | string | "redis://localhost:6379" | Redis connection URI |
connection_pool_size | integer | 5 | Size of the connection pool |
timeout_ms | integer | 1000 | Connection timeout in milliseconds |
retry_attempts | integer | 3 | Number of retry attempts for failed operations |
retry_delay_ms | integer | 100 | Delay between retry attempts in milliseconds |
cluster_mode | boolean | false | Enable Redis cluster mode |
sentinel_mode | boolean | false | Enable Redis sentinel mode |
sentinel_master | string | "mymaster" | Name of the sentinel master |
username | string | None | Redis username (for Redis 6.0+) |
password | string | None | Redis password |
database | integer | 0 | Redis database index |
Two-Tier Cache Configuration
The two-tier cache combines two separate cache providers (typically memory and Redis) into a unified caching system.
Option | Type | Default | Description |
---|---|---|---|
enabled | boolean | true | Enables or disables the two-tier cache |
fast_cache_provider | string | "memory" | The name of the provider to use as the fast cache |
slow_cache_provider | string | "redis" | The name of the provider to use as the slow cache |
fast_cache_ttl_seconds | integer | 60 | TTL for the fast cache in seconds |
slow_cache_ttl_seconds | integer | 3600 | TTL for the slow cache in seconds |
promotion_enabled | boolean | true | Enable automatic promotion from slow to fast cache |
promotion_lock_ms | integer | 10 | Promotion lock timeout in milliseconds |
Environment Variables
You can also configure the cache using environment variables. These will override the values in the configuration files.
Environment Variable | Description |
---|---|
NAVIUS_CACHE_ENABLED | Enable/disable the entire cache (true /false ) |
NAVIUS_CACHE_DEFAULT_PROVIDER | The default cache provider name |
NAVIUS_CACHE_DEFAULT_TTL | Default TTL in seconds |
NAVIUS_CACHE_MEMORY_ENABLED | Enable/disable the memory cache |
NAVIUS_CACHE_MEMORY_CAPACITY | Memory cache capacity |
NAVIUS_CACHE_REDIS_ENABLED | Enable/disable the Redis cache |
NAVIUS_CACHE_REDIS_URL | Redis connection URL |
NAVIUS_CACHE_REDIS_PASSWORD | Redis password |
NAVIUS_CACHE_TWO_TIER_ENABLED | Enable/disable the two-tier cache |
Configuration Examples
Basic Memory-Only Setup
cache:
enabled: true
default_provider: "memory"
providers:
- name: "memory"
type: "moka"
capacity: 5000
Production Redis Setup
cache:
enabled: true
default_provider: "redis"
providers:
- name: "redis"
type: "redis"
connection_string: "redis://${REDIS_HOST}:${REDIS_PORT}"
password: "${REDIS_PASSWORD}"
connection_pool_size: 20
timeout_ms: 500
Development Two-Tier Setup
cache:
enabled: true
default_provider: "two-tier"
providers:
- name: "memory"
type: "moka"
capacity: 1000
- name: "redis"
type: "redis"
connection_string: "redis://localhost:6379"
two_tier:
enabled: true
fast_cache_provider: "memory"
slow_cache_provider: "redis"
fast_cache_ttl_seconds: 30
slow_cache_ttl_seconds: 300
Disabling Cache for Testing
cache:
enabled: false
Reading Cache Configuration in Code
Here's how to access the cache configuration from your code:
```rust
use navius::core::config::CacheConfig;

fn initialize_cache(config: &CacheConfig) {
    if !config.enabled {
        println!("Cache is disabled");
        return;
    }

    println!("Using default provider: {}", config.default_provider);

    // Access provider-specific configuration
    if let Some(redis_config) = config.get_provider_config("redis") {
        println!("Redis connection: {}", redis_config.get_string("connection_string").unwrap());
    }

    // Access two-tier configuration
    if let Some(two_tier_config) = config.two_tier.as_ref() {
        if two_tier_config.enabled {
            println!("Two-tier cache is enabled");
            println!("Fast cache TTL: {}s", two_tier_config.fast_cache_ttl_seconds);
            println!("Slow cache TTL: {}s", two_tier_config.slow_cache_ttl_seconds);
        }
    }
}
```
Dynamic Cache Configuration
Navius supports dynamic cache configuration changes through the configuration management system. When the configuration is updated, the cache system will automatically apply the relevant changes without requiring a server restart.
Changes that can be applied dynamically:
- Enabling/disabling the cache
- Changing TTL values
- Adjusting capacity limits
- Modifying eviction policies
Changes that require a restart:
- Adding/removing cache providers
- Changing connection strings
- Switching the default provider type
title: "Feature Configuration Reference" description: "Detailed reference for configuring the Server Customization System in Navius applications" category: reference tags:
- reference
- configuration
- features
- server-customization
- settings related:
- ../../guides/features/server-customization-cli.md
- ../../examples/server-customization-example.md
- ../../feature-system.md last_updated: March 27, 2025 version: 1.0
Feature Configuration Reference
This document provides detailed information about configuring the Server Customization System in Navius.
Overview
The Server Customization System allows you to selectively enable or disable features based on your specific requirements, optimizing performance, reducing attack surface, and customizing the server to your exact needs.
Configuration Structure
Feature configuration can be defined in your application configuration files:
features:
# Global feature configuration
discovery_enabled: true
# Explicitly enabled features
enabled_features:
- core
- security
- metrics
- caching
- auth:oauth
# Explicitly disabled features
disabled_features:
- advanced_metrics
- tracing
# Feature-specific configuration
feature_config:
caching:
redis_enabled: true
memory_cache_size: 10000
metrics:
collect_interval_seconds: 15
enable_prometheus: true
auth:
providers:
- type: oauth
enabled: true
- type: saml
enabled: false
Configuration Options
Global Feature Options
Option | Type | Default | Description |
---|---|---|---|
discovery_enabled | boolean | true | Enables automatic discovery of features |
enabled_features | array | [] | List of features to explicitly enable |
disabled_features | array | [] | List of features to explicitly disable |
feature_config | object | {} | Feature-specific configuration options |
Feature Configuration
Each feature can have its own configuration section. Here are the common pattern options:
feature_config:
feature_name:
enabled: true # Can override the global enabled setting
# Feature-specific options follow
Feature Reference
Core Features
Core features are always enabled and cannot be disabled.
Feature ID | Description |
---|---|
core | Core server functionality, required for operation |
config | Configuration system, required for operation |
error_handling | Error handling framework, required for operation |
Optional Features
These features can be enabled or disabled based on your needs.
Feature ID | Description | Dependencies |
---|---|---|
metrics | Metrics collection and reporting | none |
advanced_metrics | Enhanced metrics for detailed monitoring | metrics |
tracing | Distributed tracing support | none |
logging | Logging system | none |
structured_logging | Structured logging with JSON output | logging |
caching | Basic caching functionality | none |
redis_caching | Redis cache provider | caching |
two_tier_caching | Two-tier cache system | caching |
auth | Authentication system | none |
auth:oauth | OAuth2 authentication provider | auth |
auth:saml | SAML authentication provider | auth |
auth:jwt | JWT authentication and validation | auth |
security | Security features | none |
rate_limiting | Rate limiting for API endpoints | security |
api | API framework | none |
graphql | GraphQL support | api |
rest | REST API support | api |
db | Database abstraction | none |
db:postgres | PostgreSQL support | db |
db:mysql | MySQL support | db |
reliability | Reliability patterns | none |
circuit_breaker | Circuit breaker pattern | reliability |
retry | Automatic retry with backoff | reliability |
timeout | Request timeout handling | reliability |
fallback | Fallback mechanisms | reliability |
bulkhead | Isolation patterns | reliability |
cli | Command-line interface | none |
scheduler | Task scheduling | none |
websocket | WebSocket support | none |
sse | Server-sent events support | none |
Environment Variables
You can also configure features using environment variables:
Environment Variable | Description |
---|---|
NAVIUS_FEATURES_DISCOVERY_ENABLED | Enable/disable feature discovery (true /false ) |
NAVIUS_FEATURES_ENABLED | Comma-separated list of features to enable |
NAVIUS_FEATURES_DISABLED | Comma-separated list of features to disable |
NAVIUS_FEATURE_METRICS_ENABLED | Enable/disable specific feature (e.g., metrics) |
NAVIUS_FEATURE_CACHING_REDIS_ENABLED | Enable/disable feature-specific option |
Feature File
You can also specify features using a YAML or JSON file:
# features.yaml
enabled:
- core
- security
- caching
- metrics
disabled:
- tracing
- advanced_metrics
configuration:
caching:
redis_enabled: true
Load this configuration using:
./navius --features=features.yaml
Or specify the environment variable:
NAVIUS_FEATURES_FILE=features.yaml ./navius
Feature Detection in Code
You can check for feature availability at runtime:
```rust
use navius::core::features::RuntimeFeatures;

fn initialize_metrics(features: &RuntimeFeatures) {
    if !features.is_enabled("metrics") {
        return;
    }

    // Initialize metrics
    println!("Initializing metrics...");

    // Check for advanced metrics
    if features.is_enabled("advanced_metrics") {
        println!("Initializing advanced metrics...");
    }
}
```
Feature Dependency Resolution
When a feature is enabled, its dependencies are automatically enabled as well. For example, enabling `advanced_metrics` will automatically enable `metrics`.
features:
enabled_features:
- advanced_metrics # This will automatically enable 'metrics' as well
Likewise, when a feature is disabled, any features that depend on it will also be disabled.
Feature Configuration API
The Server Customization System provides a programmatic API for configuring features:
```rust
use navius::core::features::{FeatureRegistry, FeatureConfig};

fn configure_features() -> FeatureRegistry {
    let mut registry = FeatureRegistry::new();

    // Enable specific features
    registry.enable("caching").unwrap();
    registry.enable("security").unwrap();

    // Disable specific features
    registry.disable("tracing").unwrap();

    // Configure a specific feature
    let mut caching_config = FeatureConfig::new("caching");
    caching_config.set_option("redis_enabled", true);
    caching_config.set_option("memory_cache_size", 10000);
    registry.configure(caching_config).unwrap();

    // Resolve dependencies
    registry.resolve_dependencies().unwrap();

    registry
}
```
Best Practices
- Start Minimal: Begin with only the essential features enabled, then add more as needed
- Group Related Features: Use feature groups to enable/disable related functionality
- Test Combinations: Test various feature combinations to ensure they work together
- Document Enabled Features: Keep track of which features are enabled in your deployment
- Monitor Impact: Watch for performance changes when enabling/disabling features
- Use Environment-Specific Configurations: Create different feature configurations for development, testing, and production
Feature Optimization Techniques
The Server Customization System uses several techniques to optimize the application based on enabled features:
- Compile-Time Exclusion: Features can be excluded at compile time using Cargo features
- Conditional Code: Code blocks can be conditionally executed based on feature availability
- Dynamic Loading: Some features can be dynamically loaded only when needed
- Dependency Tree Pruning: Dependencies are only included if required by enabled features
Example Configurations
Minimal Server (API Only)
features:
enabled_features:
- core
- api
- rest
- security
disabled_features:
- metrics
- tracing
- caching
- graphql
- websocket
- scheduler
Full Monitoring Server
features:
enabled_features:
- core
- metrics
- advanced_metrics
- tracing
- structured_logging
- security
disabled_features:
- api
- db
Production API Server
```yaml
features:
  enabled_features:
    - core
    - api
    - rest
    - security
    - metrics
    - caching
    - redis_caching
    - two_tier_caching
    - reliability
    - circuit_breaker
    - retry
    - timeout
    - rate_limiting
  disabled_features:
    - advanced_metrics
    - tracing
    - websocket
    - sse
```
Logging Config
Security Config
Patterns
title: "API Resource Abstraction Pattern" description: "Documentation about API Resource Abstraction Pattern" category: reference tags:
- api
- caching last_updated: March 27, 2025 version: 1.0
API Resource Abstraction Pattern
This document explains the API resource abstraction pattern used in our project, which provides a unified way to handle API resources with built-in reliability features.
Overview
The API resource abstraction provides a clean, consistent pattern for handling external API interactions with the following features:
- Automatic caching: Resources are cached to reduce latency and external API calls
- Retry mechanism: Failed API calls are retried with exponential backoff
- Consistent error handling: All API errors are handled in a consistent way
- Standardized logging: API interactions are logged with consistent format
- Type safety: Strong typing ensures correctness at compile time
Core Components
The abstraction consists of the following components:
- ApiResource trait: Interface that resources must implement
- ApiHandlerOptions: Configuration options for handlers
- create_api_handler: Factory function to create Axum handlers with reliability features
- Support functions: Caching and retry helpers
Using the Pattern
1. Implementing ApiResource for your model
```rust
use crate::utils::api_resource::ApiResource;

// Your model structure
#[derive(Debug, Clone, Serialize, Deserialize)]
struct User {
    id: i64,
    name: String,
    email: String,
}

// Implement ApiResource for your model
impl ApiResource for User {
    type Id = i64; // The type of the ID field

    fn resource_type() -> &'static str {
        "user" // Used for caching and logging
    }

    fn api_name() -> &'static str {
        "UserService" // Used for logging
    }
}
```
2. Creating a Fetch Function
```rust
async fn fetch_user(state: &Arc<AppState>, id: i64) -> Result<User> {
    let url = format!("{}/users/{}", state.config.user_service_url, id);

    // Create a closure that returns the actual request future
    let fetch_fn = || async { state.client.get(&url).send().await };

    // Make the API call using the common logger/handler
    api_logger::api_call("UserService", &url, fetch_fn, "User", id).await
}
```
3. Creating an API Handler
```rust
pub async fn get_user_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<String>,
) -> Result<Json<User>> {
    // Create an API handler with reliability features
    let handler = create_api_handler(
        fetch_user,
        ApiHandlerOptions {
            use_cache: true,
            use_retries: true,
        },
    );

    // Execute the handler
    handler(State(state), Path(id)).await
}
```
Configuration Options
The `ApiHandlerOptions` struct provides the following configuration options:

```rust
struct ApiHandlerOptions {
    use_cache: bool,   // Whether to use caching
    use_retries: bool, // Whether to retry failed requests
}
```
Best Practices
- Keep fetch functions simple: They should focus on the API call logic
- Use consistent naming: Name conventions help with maintenance
- Add appropriate logging: Additional context helps with debugging
- Handle errors gracefully: Return appropriate error codes to clients
- Test thoroughly: Verify behavior with unit tests for each handler
Example Use Cases
Basic Handler with Default Options
```rust
pub async fn get_product_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<String>,
) -> Result<Json<Product>> {
    create_api_handler(
        fetch_product,
        ApiHandlerOptions {
            use_cache: true,
            use_retries: true,
        },
    )(State(state), Path(id)).await
}
```
Custom Handler with Specific Options
```rust
pub async fn get_weather_handler(
    State(state): State<Arc<AppState>>,
    Path(location): Path<String>,
) -> Result<Json<Weather>> {
    create_api_handler(
        fetch_weather,
        ApiHandlerOptions {
            use_cache: true,   // Weather data can be cached
            use_retries: false, // Weather requests shouldn't retry
        },
    )(State(state), Path(location)).await
}
```
Troubleshooting
Cache Not Working
If caching isn't working as expected:
- Verify the `use_cache` option is set to `true`
- Ensure the `ApiResource` implementation is correct
- Check if the cache is enabled in the application state
Retries Not Working
If retries aren't working as expected:
- Verify the `use_retries` option is set to `true`
- Check the error type (only service errors are retried)
- Inspect the logs for retry attempts
Future Enhancements
Planned enhancements to the pattern include:
- Configurable retry policies (max attempts, backoff strategy)
- Cache TTL options per resource type
- Circuit breaker pattern for failing services
Related Documents
- API Standards - API design guidelines
- Error Handling - Error handling patterns
title: "Cache Provider Pattern" description: "Design and implementation of the cache provider pattern with pluggable providers" category: patterns tags:
- patterns
- cache
- performance
- architecture
- providers related:
- reference/patterns/repository-pattern.md
- reference/api/cache-api.md
- examples/cache-provider-example.md
- examples/two-tier-cache-example.md last_updated: March 27, 2025 version: 1.0
Cache Provider Pattern
Overview
The Cache Provider Pattern is an architectural approach that abstracts caching operations behind a generic interface with pluggable provider implementations. This enables applications to work with different caching technologies through a consistent API while allowing for flexible switching between implementations.
Problem Statement
Applications often need caching to improve performance and reduce load on backend systems, but direct coupling to specific caching technologies creates several challenges:
- Difficult to switch between caching providers (e.g., in-memory to Redis)
- Testing is complicated by dependencies on external caching systems
- Code becomes tightly coupled to specific caching APIs
- Difficult to implement advanced caching strategies like multi-level caching
- Limited ability to fine-tune caching based on resource characteristics
Solution: Cache Provider Pattern with Pluggable Providers
The Cache Provider Pattern in Navius uses a provider-based architecture with these components:
- CacheOperations Trait: Defines core caching operations
- CacheProvider Trait: Creates cache instances
- CacheProviderRegistry: Manages and selects appropriate providers
- CacheConfig: Configures cache behavior and settings
- CacheService: Orchestrates cache operations
- TwoTierCache: Implements advanced multi-level caching
Pattern Structure
```
┌─────────────────┐    creates    ┌───────────────────────┐
│  CacheService   │──────────────▶│ CacheProviderRegistry │
└────────┬────────┘               └───────────┬───────────┘
         │                                    │ selects
         │                                    ▼
         │                        ┌───────────────────────┐
         │                        │     CacheProvider     │
         │                        └───────────┬───────────┘
         │                                    │ creates
         │                                    ▼
         │            uses        ┌───────────────────────┐
         └───────────────────────▶│    CacheOperations    │
                                  └───────────────────────┘
```
Implementation
1. Cache Operations Interface
The `CacheOperations` trait defines the contract for all cache implementations:

```rust
#[async_trait]
pub trait CacheOperations<T: Send + Sync + Clone + 'static>: Send + Sync {
    /// Get a value from the cache
    async fn get(&self, key: &str) -> Option<T>;

    /// Set a value in the cache with optional TTL
    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError>;

    /// Delete a value from the cache
    async fn delete(&self, key: &str) -> Result<bool, CacheError>;

    /// Clear the entire cache
    async fn clear(&self) -> Result<(), CacheError>;

    /// Get cache statistics
    fn stats(&self) -> CacheStats;
}
```
2. Cache Provider Interface
The `CacheProvider` trait enables creating cache instances:

```rust
#[async_trait]
pub trait CacheProvider: Send + Sync {
    /// Create a new cache instance
    async fn create_cache<T: Send + Sync + Clone + 'static>(
        &self,
        config: CacheConfig
    ) -> Result<Box<dyn CacheOperations<T>>, CacheError>;

    /// Check if this provider supports the given configuration
    fn supports(&self, config: &CacheConfig) -> bool;

    /// Get the name of this provider
    fn name(&self) -> &str;
}
```
3. Cache Service
The `CacheService` manages cache instances and provides access to them:

```rust
pub struct CacheService {
    provider_registry: Arc<RwLock<CacheProviderRegistry>>,
    config_by_resource: HashMap<String, CacheConfig>,
    default_config: CacheConfig,
}

impl CacheService {
    pub fn new(registry: CacheProviderRegistry) -> Self {
        Self {
            provider_registry: Arc::new(RwLock::new(registry)),
            config_by_resource: HashMap::new(),
            default_config: CacheConfig::default(),
        }
    }

    pub async fn create_cache<T: Send + Sync + Clone + 'static>(
        &self,
        resource_name: &str
    ) -> Result<Box<dyn CacheOperations<T>>, CacheError> {
        // Use registry to create appropriate cache instance for the resource
    }

    pub async fn create_two_tier_cache<T: Send + Sync + Clone + 'static>(
        &self,
        resource_name: &str,
        config: TwoTierCacheConfig
    ) -> Result<TwoTierCache<T>, CacheError> {
        // Create a two-tier cache with fast and slow caches
    }
}
```
Benefits
- Abstraction: Decouples application from specific caching technologies
- Testability: Simplifies testing with in-memory cache implementations
- Flexibility: Easy to switch between cache providers
- Multi-Level Caching: Supports advanced caching strategies
- Type Safety: Generic typing ensures type safety across cache operations
- Configuration: Resource-specific cache configuration
- Metrics: Consistent cache statistics and monitoring
Implementation Considerations
1. Cache Key Management
Proper key management is essential for effective caching:
- Use consistent key generation strategies
- Include resource type in keys to prevent collisions
- Consider using prefixes for different resources
- Support key namespaces to isolate different parts of the application
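A minimal illustration of these key conventions (the helper is hypothetical, not a Navius API): a namespace prefix plus the resource type keeps keys from colliding across application areas.

```rust
/// Build a namespaced cache key: "<namespace>:<resource_type>:<id>".
/// Including the resource type prevents, e.g., user 42 and content 42
/// from sharing a key.
fn cache_key(namespace: &str, resource_type: &str, id: &str) -> String {
    format!("{}:{}:{}", namespace, resource_type, id)
}
```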
2. Time-to-Live (TTL) Strategies
Different resources may need different TTL strategies:
- Critical, frequently changing data: shorter TTLs
- Static content: longer TTLs, potentially indefinite
- User-specific data: session-based TTLs
- Two-tier caching: different TTLs for each tier
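These strategies can be expressed as a simple per-category TTL policy. The categories and durations below are illustrative placeholders, not values prescribed by Navius:

```rust
use std::time::Duration;

/// Hypothetical TTL policy mapping the categories above to durations.
/// `None` means "no expiry" (static content).
fn ttl_for(category: &str) -> Option<Duration> {
    match category {
        "volatile" => Some(Duration::from_secs(30)),   // frequently changing data
        "static"   => None,                            // effectively indefinite
        "session"  => Some(Duration::from_secs(1800)), // user/session-scoped data
        _          => Some(Duration::from_secs(300)),  // conservative default
    }
}
```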
3. Cache Eviction Policies
Implement appropriate eviction policies:
- LRU (Least Recently Used)
- LFU (Least Frequently Used)
- Size-based eviction
- Time-based expiration
- Custom eviction strategies
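To make the LRU policy concrete, here is a deliberately simple sketch using only the standard library; real cache providers implement the same idea far more efficiently (O(1) recency updates rather than the linear scan used here).

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal LRU cache sketch: the front of `order` is the least recently
/// used key and is evicted first when capacity is exceeded.
struct LruCache {
    capacity: usize,
    map: HashMap<String, String>,
    order: VecDeque<String>, // front = least recently used
}

impl LruCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &str) -> Option<String> {
        if self.map.contains_key(key) {
            // Move the key to the back (most recently used)
            self.order.retain(|k| k != key);
            self.order.push_back(key.to_string());
        }
        self.map.get(key).cloned()
    }

    fn set(&mut self, key: &str, value: &str) {
        if self.map.insert(key.to_string(), value.to_string()).is_none() {
            self.order.push_back(key.to_string());
            if self.order.len() > self.capacity {
                // Evict the least recently used entry
                if let Some(evicted) = self.order.pop_front() {
                    self.map.remove(&evicted);
                }
            }
        } else {
            // Updating an existing key also counts as a use
            self.order.retain(|k| k != key);
            self.order.push_back(key.to_string());
        }
    }
}
```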
4. Cache Synchronization
In distributed environments, consider cache synchronization:
- Implement cache invalidation messaging
- Use versioning for cache entries
- Consider eventual consistency implications
- Use distributed cache systems (Redis) for shared state
5. Error Handling
Caching should not affect critical application flow:
- Fail gracefully when cache operations fail
- Log cache errors without disrupting user operations
- Consider cache-aside pattern for resilience
- Implement circuit breaker for cache operations
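The "fail gracefully" point can be sketched as follows: a failing cache read is logged and treated as a miss, so the request is still served from the source of truth. `FlakyCache` and `fetch_from_source` are illustrative stand-ins, not Navius types.

```rust
/// Stand-in cache that can be forced to fail, to demonstrate fallback.
struct FlakyCache {
    fail: bool,
}

impl FlakyCache {
    fn get(&self, _key: &str) -> Result<Option<String>, String> {
        if self.fail {
            Err("cache unavailable".to_string())
        } else {
            Ok(None) // healthy but empty cache
        }
    }
}

/// Stand-in for the primary data source.
fn fetch_from_source(key: &str) -> String {
    format!("value-for-{}", key)
}

/// Cache errors are non-fatal: log and fall through to the source.
fn get_resilient(cache: &FlakyCache, key: &str) -> String {
    match cache.get(key) {
        Ok(Some(v)) => v,
        Ok(None) => fetch_from_source(key),
        Err(e) => {
            eprintln!("cache error (continuing without cache): {}", e);
            fetch_from_source(key)
        }
    }
}
```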
Example Implementations
In-Memory Cache
```rust
pub struct MemoryCache<T> {
    cache: Arc<Mutex<moka::sync::Cache<String, T>>>,
    config: CacheConfig,
    stats: Arc<CacheStats>,
}

#[async_trait]
impl<T: Send + Sync + Clone + 'static> CacheOperations<T> for MemoryCache<T> {
    async fn get(&self, key: &str) -> Option<T> {
        let result = self.cache.lock().unwrap().get(key);

        // Update stats
        if result.is_some() {
            self.stats.increment_hits();
        } else {
            self.stats.increment_misses();
        }

        result
    }

    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError> {
        // Fall back to the configured default TTL when none is given
        let ttl = ttl.or(self.config.ttl);
        let cache = self.cache.lock().unwrap();

        if let Some(ttl) = ttl {
            cache.insert_with_ttl(key.to_string(), value, ttl);
        } else {
            cache.insert(key.to_string(), value);
        }

        Ok(())
    }

    // Other methods implementation...
}
```
Redis Cache
```rust
pub struct RedisCache<T> {
    client: redis::Client,
    serializer: Box<dyn Serializer<T>>,
    config: CacheConfig,
    stats: Arc<CacheStats>,
}

#[async_trait]
impl<T: Send + Sync + Clone + 'static> CacheOperations<T> for RedisCache<T> {
    async fn get(&self, key: &str) -> Option<T> {
        let mut conn = self.client.get_async_connection().await.ok()?;

        let result: Option<String> = redis::cmd("GET")
            .arg(key)
            .query_async(&mut conn)
            .await
            .ok()?;

        let value = result.and_then(|data| self.serializer.deserialize(&data).ok());

        // Update stats
        if value.is_some() {
            self.stats.increment_hits();
        } else {
            self.stats.increment_misses();
        }

        value
    }

    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError> {
        let mut conn = self.client.get_async_connection().await?;
        let data = self.serializer.serialize(&value)?;
        // Fall back to the configured default TTL when none is given
        let ttl = ttl.or(self.config.ttl);

        if let Some(ttl) = ttl {
            redis::cmd("SETEX")
                .arg(key)
                .arg(ttl.as_secs())
                .arg(data)
                .query_async(&mut conn)
                .await?;
        } else {
            redis::cmd("SET")
                .arg(key)
                .arg(data)
                .query_async(&mut conn)
                .await?;
        }

        Ok(())
    }

    // Other methods implementation...
}
```
Two-Tier Cache Implementation
```rust
pub struct TwoTierCache<T> {
    fast_cache: Box<dyn CacheOperations<T>>,
    slow_cache: Box<dyn CacheOperations<T>>,
    promote_on_get: bool,
    fast_ttl: Option<Duration>,
}

#[async_trait]
impl<T: Send + Sync + Clone + 'static> CacheOperations<T> for TwoTierCache<T> {
    async fn get(&self, key: &str) -> Option<T> {
        // Try fast cache first
        if let Some(value) = self.fast_cache.get(key).await {
            return Some(value);
        }

        // If not in fast cache, try slow cache
        if let Some(value) = self.slow_cache.get(key).await {
            // Promote to fast cache if configured to do so
            if self.promote_on_get {
                let value_clone = value.clone();
                let _ = self.fast_cache.set(key, value_clone, self.fast_ttl).await;
            }
            return Some(value);
        }

        None
    }

    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError> {
        // Set in both caches
        let value_clone = value.clone();
        let fast_result = self.fast_cache.set(key, value_clone, self.fast_ttl).await;
        let slow_result = self.slow_cache.set(key, value, ttl).await;

        // Return error if either operation failed
        fast_result.and(slow_result)
    }

    // Other methods implementation...
}
```
API Example
```rust
// Get the cache service
let cache_service = service_registry.get::<CacheService>();

// Create a typed cache for users
let user_cache: Box<dyn CacheOperations<UserDto>> =
    cache_service.create_cache("users").await?;

// Create a user
let user = UserDto {
    id: "user-123".to_string(),
    name: "Alice".to_string(),
    email: "[email protected]".to_string(),
};

// Cache the user with 5 minute TTL
user_cache.set(&user.id, user.clone(), Some(Duration::from_secs(300))).await?;

// Get the user from cache
if let Some(cached_user) = user_cache.get(&user.id).await {
    println!("Found user: {}", cached_user.name);
}

// Create a two-tier cache
let config = TwoTierCacheConfig::new()
    .with_fast_provider("memory")
    .with_slow_provider("redis")
    .with_fast_ttl(Duration::from_secs(60))
    .with_slow_ttl(Duration::from_secs(3600))
    .with_promotion_enabled(true);

let two_tier_cache = cache_service
    .create_two_tier_cache::<ProductDto>("products", config)
    .await?;

// Use the two-tier cache
two_tier_cache.set("product-1", product, None).await?;
```
Related Patterns
- Repository Pattern: Often used with Cache Provider Pattern for cached data access
- Strategy Pattern: Different cache providers implement different strategies
- Adapter Pattern: Adapts specific cache APIs to the common interface
- Decorator Pattern: Used in two-tier caching to layer cache functionality
- Factory Pattern: Used to create cache instances
- Builder Pattern: Used for configuration building
References
title: "Caching Patterns in Navius" description: "Technical reference documentation for common caching patterns used throughout the Navius framework" category: reference tags:
- reference
- patterns
- caching
- performance
- best-practices related:
- ../../guides/caching-strategies.md
- ../configuration/cache-config.md
- ../../examples/two-tier-cache-example.md last_updated: March 27, 2025 version: 1.0
Caching Patterns in Navius
This reference document describes the common caching patterns used throughout the Navius framework and how to implement them effectively.
Core Caching Patterns
Cache-Aside Pattern
The most common caching pattern used in Navius is the Cache-Aside (Lazy Loading) pattern:
```rust
async fn get_data(&self, key: &str) -> Result<Data, Error> {
    // Try to get from cache first
    if let Some(cached_data) = self.cache.get(key).await? {
        return Ok(cached_data);
    }

    // Not in cache, fetch from the source
    let data = self.data_source.fetch(key).await?;

    // Store in cache for future requests
    self.cache.set(key, &data, Some(Duration::from_secs(300))).await?;

    Ok(data)
}
```
This pattern:
- Checks the cache first
- Falls back to the source only if needed
- Updates the cache with the fetched data
- Returns the data to the client
Write-Through Cache
For data consistency when writing, Navius recommends the Write-Through pattern:
```rust
async fn save_data(&self, key: &str, data: &Data) -> Result<(), Error> {
    // Write to the source first
    self.data_source.save(key, data).await?;

    // Then update the cache
    self.cache.set(key, data, Some(Duration::from_secs(300))).await?;

    Ok(())
}
```
This pattern:
- Ensures data is safely stored in the primary source first
- Updates the cache to maintain consistency
- Provides fast reads while ensuring write durability
Cache Invalidation
For invalidation, Navius uses the direct invalidation pattern:
```rust
async fn delete_data(&self, key: &str) -> Result<(), Error> {
    // Delete from the source
    self.data_source.delete(key).await?;

    // Invalidate cache
    self.cache.delete(key).await?;

    Ok(())
}
```
Multi-Level Caching
Two-Tier Cache
Navius implements the Two-Tier Cache pattern for optimal performance:
```rust
// Two-tier cache implementation
pub struct TwoTierCache {
    fast_cache: Box<dyn CacheOperations>, // In-memory cache
    slow_cache: Box<dyn CacheOperations>, // Redis or other distributed cache
}

impl CacheOperations for TwoTierCache {
    async fn get(&self, key: &str) -> Result<Option<Vec<u8>>> {
        // Try fast cache first
        if let Some(data) = self.fast_cache.get(key).await? {
            return Ok(Some(data));
        }

        // Then try slow cache
        if let Some(data) = self.slow_cache.get(key).await? {
            // Promote to fast cache
            self.fast_cache.set(key, &data, None).await?;
            return Ok(Some(data));
        }

        Ok(None)
    }
}
```
This pattern:
- Provides extremely fast access for frequently used data
- Maintains resiliency through the slower but more durable cache
- Automatically promotes items to the faster cache when accessed
Specialized Caching Patterns
Time-Based Expiration
For caches that need automatic expiration:
```rust
async fn get_with_ttl(&self, key: &str, ttl: Duration) -> Result<Option<Data>, Error> {
    let data = self.cache.get(key).await?;

    if let Some(value) = &data {
        // Refresh the TTL on access (sliding expiration)
        self.cache.set(key, value, Some(ttl)).await?;
    }

    Ok(data)
}
```
Collection Caching
For caching collections of items:
```rust
async fn get_collection(&self, collection_key: &str) -> Result<Vec<Item>, Error> {
    // Try to get collection from cache
    if let Some(items) = self.cache.get(collection_key).await? {
        return Ok(items);
    }

    // Fetch collection
    let items = self.repository.get_all().await?;

    // Cache the collection
    self.cache.set(collection_key, &items, Some(Duration::from_secs(60))).await?;

    // Optionally, cache individual items too
    for item in &items {
        let item_key = format!("item:{}", item.id);
        self.cache.set(&item_key, item, Some(Duration::from_secs(300))).await?;
    }

    Ok(items)
}
```
Read-Through Cache
For simplicity in some scenarios:
```rust
// Read-through cache implementation
pub struct ReadThroughCache<T> {
    cache: Box<dyn CacheOperations>,
    data_source: Arc<dyn DataSource<T>>,
}

impl<T> ReadThroughCache<T> {
    async fn get(&self, key: &str) -> Result<Option<T>, Error> {
        // Try to get from cache
        if let Some(data) = self.cache.get(key).await? {
            return Ok(Some(data));
        }

        // Not in cache, fetch from source
        let data = self.data_source.get(key).await?;

        if let Some(data_ref) = &data {
            // Store in cache for future
            self.cache.set(key, data_ref, None).await?;
        }

        Ok(data)
    }
}
```
Cache Eviction Strategies
The Navius caching system supports several eviction strategies:
- LRU (Least Recently Used): Evicts the least recently accessed items first
- LFU (Least Frequently Used): Evicts the least frequently accessed items first
- FIFO (First In First Out): Evicts the oldest entries first
- TTL (Time To Live): Evicts entries that have expired based on a set duration
Example configuration:
```yaml
cache:
  providers:
    - name: "memory"
      eviction_policy: "LRU" # Options: LRU, LFU, FIFO, TTL
      capacity: 10000
```
Cache Key Design Patterns
Prefix-Based Keys
Navius recommends prefix-based keys for organization:
```rust
// User-related keys
let user_key = format!("user:{}", user_id);
let user_prefs_key = format!("user:{}:preferences", user_id);
let user_sessions_key = format!("user:{}:sessions", user_id);

// Content-related keys
let content_key = format!("content:{}", content_id);
let content_views_key = format!("content:{}:views", content_id);
```
Composite Keys
For complex lookups:
```rust
// Composite key for filtered search results
let search_key = format!("search:{}:filter:{}:page:{}", query, filter_hash, page_number);
```
Cache Implementation Pattern
Navius follows this pattern for integrating caches with services:
```rust
pub struct EntityService<T> {
    cache: Arc<dyn TypedCache<T>>,
    repository: Arc<dyn EntityRepository<T>>,
}

impl<T> EntityService<T> {
    // Constructor with cache
    pub fn new(cache: Arc<dyn TypedCache<T>>, repository: Arc<dyn EntityRepository<T>>) -> Self {
        Self { cache, repository }
    }

    // Get entity by ID with caching
    pub async fn get_by_id(&self, id: &str) -> Result<Option<T>, Error> {
        let cache_key = format!("entity:{}", id);

        // Try cache first
        if let Some(entity) = self.cache.get(&cache_key).await? {
            return Ok(Some(entity));
        }

        // Not in cache, get from repository
        if let Some(entity) = self.repository.find_by_id(id).await? {
            // Cache for future
            self.cache.set(&cache_key, &entity, None).await?;
            return Ok(Some(entity));
        }

        Ok(None)
    }
}
```
Best Practices
- Cache Appropriate Data: Cache data that is:
  - Frequently accessed
  - Expensive to compute or retrieve
  - Relatively static (doesn't change often)
- Set Appropriate TTLs: Consider:
  - How frequently data changes
  - Tolerance for stale data
  - Memory constraints
- Cache Consistency:
  - Update or invalidate cache entries when the source data changes
  - Consider using event-based cache invalidation for distributed systems
- Error Handling:
  - Treat cache failures as non-critical
  - Implement fallbacks when cache is unavailable
  - Log cache errors but continue operation
- Monitoring:
  - Track cache hit/miss ratios
  - Monitor memory usage
  - Set alerts for unusual patterns
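Hit/miss tracking can be as simple as two atomic counters. The sketch below is in the spirit of the `CacheStats` type referenced earlier in this document; the actual Navius type may differ.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Minimal hit/miss counter for cache monitoring.
#[derive(Default)]
struct CacheStats {
    hits: AtomicU64,
    misses: AtomicU64,
}

impl CacheStats {
    fn record_hit(&self) {
        self.hits.fetch_add(1, Ordering::Relaxed);
    }

    fn record_miss(&self) {
        self.misses.fetch_add(1, Ordering::Relaxed);
    }

    /// Hit ratio in [0.0, 1.0]; 0.0 when nothing has been recorded yet.
    fn hit_ratio(&self) -> f64 {
        let hits = self.hits.load(Ordering::Relaxed) as f64;
        let total = hits + self.misses.load(Ordering::Relaxed) as f64;
        if total == 0.0 { 0.0 } else { hits / total }
    }
}
```

A sustained drop in `hit_ratio()` is a typical trigger for the alerting mentioned above.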
Anti-Patterns to Avoid
- Caching Everything: Not all data benefits from caching
- No Eviction Policy: Always implement some form of eviction
- Unbounded Cache Growth: Set capacity limits
- Cache Stampede: Use techniques like request coalescing to prevent multiple identical fetches
- Ignoring Cache Errors: Implement proper error handling
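To illustrate the cache-stampede point, here is a simplified, blocking request-coalescing sketch: concurrent callers for the same key share a single backend fetch instead of each hitting the source. This is only the underlying idea, not the Navius mechanism (which would be async).

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

struct Coalescer {
    cache: Mutex<HashMap<String, String>>,
    // One lock per key so only one caller performs the expensive fetch
    key_locks: Mutex<HashMap<String, Arc<Mutex<()>>>>,
    fetches: AtomicU64, // instrumentation: how often the backend was hit
}

impl Coalescer {
    fn new() -> Self {
        Self {
            cache: Mutex::new(HashMap::new()),
            key_locks: Mutex::new(HashMap::new()),
            fetches: AtomicU64::new(0),
        }
    }

    fn get_or_fetch(&self, key: &str) -> String {
        // Fast path: already cached
        if let Some(v) = self.cache.lock().unwrap().get(key) {
            return v.clone();
        }
        // Take (or create) the per-key lock
        let key_lock = self
            .key_locks
            .lock()
            .unwrap()
            .entry(key.to_string())
            .or_insert_with(|| Arc::new(Mutex::new(())))
            .clone();
        let _guard = key_lock.lock().unwrap();
        // Re-check: another caller may have filled the cache while we waited
        if let Some(v) = self.cache.lock().unwrap().get(key) {
            return v.clone();
        }
        // Simulated expensive backend fetch
        self.fetches.fetch_add(1, Ordering::SeqCst);
        let value = format!("value-for-{}", key);
        self.cache.lock().unwrap().insert(key.to_string(), value.clone());
        value
    }
}
```

The double-check after acquiring the per-key lock is what prevents the stampede: late arrivals find the value already cached.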
Related Components
title: "Database Service Pattern" description: "Design and implementation of the database service pattern with pluggable providers" category: patterns tags:
- patterns
- database
- architecture
- providers related:
- reference/patterns/repository-pattern.md
- reference/api/database-api.md
- examples/database-service-example.md last_updated: March 27, 2025 version: 1.0
Database Service Pattern
Overview
The Database Service Pattern provides a generic abstraction for database operations with pluggable provider implementations. This enables applications to work with different database technologies through a consistent interface while allowing for easy switching between implementations.
Problem Statement
Applications typically need to interact with databases, but direct coupling to specific database technologies creates several challenges:
- Difficult to switch between database providers (e.g., PostgreSQL to MongoDB)
- Testing is complicated by dependencies on actual database instances
- Code becomes tightly coupled to specific database APIs
- Difficult to implement caching or other cross-cutting concerns
- Limited ability to leverage different databases for different use cases
Solution: Database Service Pattern with Pluggable Providers
The Database Service Pattern in Navius uses a provider-based architecture with these components:
- DatabaseOperations Trait: Defines core database operations
- DatabaseProvider Trait: Creates database instances
- DatabaseProviderRegistry: Manages and selects appropriate providers
- DatabaseConfig: Configures database connections and behavior
- DatabaseService: Orchestrates database operations
Pattern Structure
```
┌──────────────────┐    creates    ┌──────────────────────────┐
│ DatabaseService  │──────────────▶│ DatabaseProviderRegistry │
└────────┬─────────┘               └────────────┬─────────────┘
         │                                      │ selects
         │                                      ▼
         │                         ┌──────────────────────────┐
         │                         │     DatabaseProvider     │
         │                         └────────────┬─────────────┘
         │                                      │ creates
         │                                      ▼
         │            uses         ┌──────────────────────────┐
         └────────────────────────▶│    DatabaseOperations    │
                                   └──────────────────────────┘
```
Implementation
1. Database Operations Interface
The `DatabaseOperations` trait defines the contract for all database implementations:

```rust
#[async_trait]
pub trait DatabaseOperations: Send + Sync {
    /// Get a value from the database
    async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError>;

    /// Set a value in the database
    async fn set(&self, collection: &str, key: &str, value: &str) -> Result<(), ServiceError>;

    /// Delete a value from the database
    async fn delete(&self, collection: &str, key: &str) -> Result<bool, ServiceError>;

    /// Query the database with a filter
    async fn query(&self, collection: &str, filter: &str) -> Result<Vec<String>, ServiceError>;

    /// Execute a database transaction with multiple operations
    async fn transaction<F, T>(&self, operations: F) -> Result<T, ServiceError>
    where
        F: FnOnce(&dyn DatabaseOperations) -> Result<T, ServiceError> + Send + 'static,
        T: Send + 'static;
}
```
2. Database Provider Interface
The `DatabaseProvider` trait enables creating database instances:

```rust
#[async_trait]
pub trait DatabaseProvider: Send + Sync {
    /// The type of database this provider creates
    type Database: DatabaseOperations;

    /// Create a new database instance
    async fn create_database(&self, config: DatabaseConfig) -> Result<Self::Database, ServiceError>;

    /// Check if this provider supports the given configuration
    fn supports(&self, config: &DatabaseConfig) -> bool;

    /// Get the name of this provider
    fn name(&self) -> &str;
}
```
3. Database Service
The `DatabaseService` manages database instances and provides access to them:

```rust
pub struct DatabaseService {
    provider_registry: Arc<RwLock<DatabaseProviderRegistry>>,
    default_config: DatabaseConfig,
}

impl DatabaseService {
    pub fn new(registry: DatabaseProviderRegistry) -> Self {
        Self {
            provider_registry: Arc::new(RwLock::new(registry)),
            default_config: DatabaseConfig::default(),
        }
    }

    pub fn with_default_config(mut self, config: DatabaseConfig) -> Self {
        self.default_config = config;
        self
    }

    pub async fn create_database(&self) -> Result<Box<dyn DatabaseOperations>, ServiceError> {
        // Use registry to create appropriate database instance
    }
}
```
4. Provider Registry
The `DatabaseProviderRegistry` stores available providers and selects the appropriate one:

```rust
pub struct DatabaseProviderRegistry {
    providers: HashMap<String, Box<dyn AnyDatabaseProvider>>,
}

impl DatabaseProviderRegistry {
    pub fn new() -> Self {
        Self {
            providers: HashMap::new(),
        }
    }

    pub fn register<P: DatabaseProvider + 'static>(&mut self, name: &str, provider: P) {
        self.providers.insert(name.to_string(), Box::new(provider));
    }

    pub async fn create_database(
        &self,
        provider_name: &str,
        config: DatabaseConfig
    ) -> Result<Box<dyn DatabaseOperations>, ServiceError> {
        // Find provider and create database
    }
}
```
Benefits
- Abstraction: Decouples application from specific database technologies
- Testability: Simplifies testing with in-memory database implementations
- Flexibility: Easy to switch between database providers
- Consistency: Provides uniform interface for different database technologies
- Extensibility: New database providers can be added without changing client code
- Cross-Cutting Concerns: Enables adding logging, metrics, and caching consistently
Implementation Considerations
1. Transaction Support
Different databases have different transaction models:
- Relational databases have ACID transactions
- Some NoSQL databases have limited transaction support
- In-memory implementations may need to simulate transactions
The pattern should provide a consistent abstraction that works across different implementations:
```rust
// Example transaction usage
db.transaction(|tx| {
    tx.set("users", "user-1", r#"{"name":"Alice"}"#)?;
    tx.set("accounts", "account-1", r#"{"owner":"user-1","balance":100}"#)?;
    Ok(())
}).await?;
```
2. Query Language
Database technologies use different query languages (SQL, NoSQL query APIs). The pattern should provide:
- A simple string-based query interface for basic filtering
- Support for native query formats where needed
- Helpers for common query patterns
3. Connection Pooling
Database connections are often expensive resources:
- Implement connection pooling in database providers
- Configure pool sizes and connection timeouts
- Handle connection errors gracefully
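A toy pool using only the standard library illustrates the mechanics; production pools (for example `sqlx::PgPool`) add async acquisition, timeouts, and health checks on top of the same idea.

```rust
use std::sync::Mutex;

/// Stand-in for a real database connection.
struct Conn {
    id: usize,
}

/// Minimal connection-pool sketch: a bounded set of reusable handles.
struct Pool {
    idle: Mutex<Vec<Conn>>,
}

impl Pool {
    fn new(size: usize) -> Self {
        // Pre-create `size` connections
        let idle = (0..size).map(|id| Conn { id }).collect();
        Self { idle: Mutex::new(idle) }
    }

    /// Take a connection, or None if the pool is exhausted.
    fn acquire(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop()
    }

    /// Return a connection to the pool for reuse.
    fn release(&self, conn: Conn) {
        self.idle.lock().unwrap().push(conn);
    }

    fn idle_count(&self) -> usize {
        self.idle.lock().unwrap().len()
    }
}
```

Exhaustion returning `None` is where a real pool would instead wait with a timeout, per the bullet above.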
4. Error Handling
Database errors should be mapped to application-specific errors:
- Create meaningful error categories (NotFound, Conflict, etc.)
- Include useful context in error messages
- Avoid exposing internal database details in errors
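A sketch of such a mapping follows. The matched substrings stand in for a real driver's error variants, and the local `ServiceError` mirrors (but is not) the type used elsewhere in this document.

```rust
/// Application-level error categories that hide driver details.
#[derive(Debug, PartialEq)]
enum ServiceError {
    NotFound,
    Conflict,
    DatabaseError(String),
}

/// Map a low-level driver message to an application error category.
fn map_db_error(driver_msg: &str) -> ServiceError {
    if driver_msg.contains("no rows") {
        ServiceError::NotFound
    } else if driver_msg.contains("duplicate key") {
        ServiceError::Conflict
    } else {
        // Keep a generic message; don't leak internal database details
        ServiceError::DatabaseError("database operation failed".to_string())
    }
}
```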
Example Implementations
In-Memory Database
```rust
pub struct InMemoryDatabase {
    data: Arc<RwLock<HashMap<String, HashMap<String, String>>>>,
}

#[async_trait]
impl DatabaseOperations for InMemoryDatabase {
    async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError> {
        let data = self.data.read().await;

        if let Some(collection_data) = data.get(collection) {
            return Ok(collection_data.get(key).cloned());
        }

        Ok(None)
    }

    async fn set(&self, collection: &str, key: &str, value: &str) -> Result<(), ServiceError> {
        let mut data = self.data.write().await;
        let collection_data = data.entry(collection.to_string()).or_insert_with(HashMap::new);
        collection_data.insert(key.to_string(), value.to_string());
        Ok(())
    }

    // Other methods implementation...
}
```
PostgreSQL Database
```rust
pub struct PostgresDatabase {
    pool: PgPool,
}

#[async_trait]
impl DatabaseOperations for PostgresDatabase {
    async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError> {
        let query = format!(
            "SELECT data FROM {} WHERE id = $1",
            sanitize_identifier(collection)
        );

        let result = sqlx::query_scalar(&query)
            .bind(key)
            .fetch_optional(&self.pool)
            .await
            .map_err(|e| ServiceError::database_error(e.to_string()))?;

        Ok(result)
    }

    // Other methods implementation...
}
```
API Example
```rust
// Get the database service
let db_service = service_registry.get::<DatabaseService>();

// Create a database instance
let db = db_service.create_database().await?;

// Store user data
let user_data = r#"{"id":"user-123","name":"Alice","role":"admin"}"#;
db.set("users", "user-123", user_data).await?;

// Retrieve user data
if let Some(data) = db.get("users", "user-123").await? {
    let user: User = serde_json::from_str(&data)?;
    println!("Found user: {}", user.name);
}

// Query users by role
let admins = db.query("users", "role='admin'").await?;
println!("Found {} admin users", admins.len());

// Execute a transaction
db.transaction(|tx| {
    tx.set("users", "user-1", r#"{"name":"Alice"}"#)?;
    tx.set("accounts", "account-1", r#"{"owner":"user-1","balance":100}"#)?;
    Ok(())
}).await?;
```
Related Patterns
- Repository Pattern: Often used with Database Service Pattern to provide domain-specific data access
- Factory Pattern: Used to create database instances
- Strategy Pattern: Different database providers implement different strategies
- Adapter Pattern: Adapts specific database APIs to the common interface
- Builder Pattern: Used for configuration building
References
Error Handling
title: "Feature Selection Pattern"
description: "Design and implementation of the feature selection pattern in Navius"
category: patterns
tags:
  - patterns
  - feature-flags
  - configuration
  - architecture
related:
  - reference/patterns/cache-provider-pattern.md
  - examples/configuration-example.md
last_updated: March 27, 2025
version: 1.0
Feature Selection Pattern
Overview
The Feature Selection Pattern in Navius provides a mechanism for dynamically enabling or disabling specific features or functionalities without changing the codebase. This pattern allows for feature toggling, A/B testing, progressive rollouts, and conditional feature access based on user roles, environments, or other criteria.
Problem Statement
Modern application development faces several challenges related to feature deployment and management:
- Continuous Deployment: How to deploy new features without affecting current users?
- Progressive Rollout: How to release features to a subset of users for testing?
- A/B Testing: How to compare different implementations of the same feature?
- Environment-Specific Behavior: How to enable features only in specific environments?
- User-Specific Access: How to restrict features to certain user roles or subscription levels?
- Performance Optimization: How to selectively enable resource-intensive features?
Solution: Feature Selection Pattern
The Feature Selection Pattern in Navius addresses these challenges through a unified feature flag system:
- Feature Flags: Named boolean switches that control feature availability
- Flag Sources: Multiple sources for flag values (config files, database, remote services)
- Evaluation Context: Context-aware feature resolution (user data, environment, etc.)
- Override Hierarchy: Clear precedence rules for conflicting flag values
- Runtime Toggles: Ability to change flag values without application restart
Pattern Structure
```
┌───────────────────┐
│  FeatureService   │
└─────────┬─────────┘
          │
          │ uses
          ▼
┌───────────────────┐    consults    ┌───────────────────┐
│  FeatureRegistry  │ ─────────────▶ │   Configuration   │
└───────────────────┘                └───────────────────┘
          │                                    ▲
          │ contains                           │ provides
          ▼                                    │
┌───────────────────┐                ┌───────────────────┐
│    FeatureFlag    │                │  Flag Providers   │
└───────────────────┘                └───────────────────┘
```
Implementation
Feature Flag Definition
A feature flag is defined by its name, description, default value, and optional context-dependent rules:
```rust
pub struct FeatureFlag {
    /// Unique identifier for the feature
    pub name: String,
    /// Human-readable description of the feature
    pub description: String,
    /// Default value if no other rules match
    pub default_value: bool,
    /// Optional rules for contextual evaluation
    pub rules: Vec<FeatureRule>,
}

pub struct FeatureRule {
    /// Condition that must be satisfied for this rule to apply
    pub condition: Box<dyn FeatureCondition>,
    /// Value to use if the condition is met
    pub value: bool,
}

/// Trait for implementing feature flag conditions
pub trait FeatureCondition: Send + Sync {
    /// Evaluate whether this condition applies in the given context
    fn evaluate(&self, context: &FeatureContext) -> bool;
}
```
Feature Registry
The feature registry maintains the collection of available feature flags:
```rust
pub struct FeatureRegistry {
    features: RwLock<HashMap<String, FeatureFlag>>,
}

impl FeatureRegistry {
    pub fn new() -> Self {
        Self {
            features: RwLock::new(HashMap::new()),
        }
    }

    pub fn register(&self, feature: FeatureFlag) -> Result<(), AppError> {
        let mut features = self.features.write().map_err(|_| {
            AppError::internal_server_error("Failed to acquire write lock on feature registry")
        })?;
        features.insert(feature.name.clone(), feature);
        Ok(())
    }

    pub fn get(&self, name: &str) -> Result<FeatureFlag, AppError> {
        let features = self.features.read().map_err(|_| {
            AppError::internal_server_error("Failed to acquire read lock on feature registry")
        })?;
        features
            .get(name)
            .cloned()
            .ok_or_else(|| AppError::not_found(format!("Feature flag '{}' not found", name)))
    }
}
```
Feature Service
The feature service provides the primary API for checking feature flags:
```rust
pub struct FeatureService {
    registry: Arc<FeatureRegistry>,
    providers: Vec<Box<dyn FeatureFlagProvider>>,
}

impl FeatureService {
    pub fn new(
        registry: Arc<FeatureRegistry>,
        providers: Vec<Box<dyn FeatureFlagProvider>>,
    ) -> Self {
        Self { registry, providers }
    }

    /// Check if a feature is enabled in the given context
    pub fn is_enabled(&self, name: &str, context: &FeatureContext) -> Result<bool, AppError> {
        // First check if any provider has an override
        for provider in &self.providers {
            if let Some(value) = provider.get_flag_value(name, context) {
                return Ok(value);
            }
        }

        // If no provider has an override, evaluate the feature flag
        let feature = self.registry.get(name)?;

        // Evaluate rules in order, using the first matching rule
        for rule in &feature.rules {
            if rule.condition.evaluate(context) {
                return Ok(rule.value);
            }
        }

        // If no rules match, use the default value
        Ok(feature.default_value)
    }

    /// Shorthand for checking if a feature is enabled with empty context
    pub fn is_feature_enabled(&self, name: &str) -> Result<bool, AppError> {
        self.is_enabled(name, &FeatureContext::default())
    }
}
```
Feature Context
The context provides information for contextual feature evaluation:
```rust
pub struct FeatureContext {
    /// Current environment (dev, test, prod)
    pub environment: String,
    /// Current user information (if available)
    pub user: Option<UserContext>,
    /// Custom attributes for application-specific conditions
    pub attributes: HashMap<String, Value>,
}

pub struct UserContext {
    pub id: String,
    pub roles: Vec<String>,
    pub groups: Vec<String>,
}

impl FeatureContext {
    pub fn default() -> Self {
        Self {
            environment: "development".to_string(),
            user: None,
            attributes: HashMap::new(),
        }
    }

    pub fn for_environment(environment: &str) -> Self {
        let mut context = Self::default();
        context.environment = environment.to_string();
        context
    }

    pub fn for_user(user_id: &str, roles: Vec<String>, groups: Vec<String>) -> Self {
        let mut context = Self::default();
        context.user = Some(UserContext {
            id: user_id.to_string(),
            roles,
            groups,
        });
        context
    }
}
```
Feature Flag Providers
Providers supply feature flag values from different sources:
```rust
/// Trait for implementing feature flag providers
pub trait FeatureFlagProvider: Send + Sync {
    /// Get the value for a feature flag in the given context.
    /// Returns None if this provider doesn't have a value for the flag.
    fn get_flag_value(&self, name: &str, context: &FeatureContext) -> Option<bool>;
}

/// Configuration-based provider that reads flags from the application config
pub struct ConfigFeatureFlagProvider {
    config: Arc<AppConfig>,
}

impl FeatureFlagProvider for ConfigFeatureFlagProvider {
    fn get_flag_value(&self, name: &str, _context: &FeatureContext) -> Option<bool> {
        self.config.features.get(name).copied()
    }
}

/// Remote provider that fetches flags from a remote service
pub struct RemoteFeatureFlagProvider {
    client: FeatureFlagClient,
    cache: Arc<dyn Cache>,
    cache_ttl: Duration,
}

impl FeatureFlagProvider for RemoteFeatureFlagProvider {
    fn get_flag_value(&self, name: &str, context: &FeatureContext) -> Option<bool> {
        // Check cache first
        if let Some(value) = self.cache.get::<bool>(name) {
            return Some(value);
        }

        // If not in cache, fetch from remote
        if let Ok(value) = self.client.get_flag_value(name, context) {
            // Update cache
            let _ = self.cache.set(name, value, self.cache_ttl);
            return Some(value);
        }

        None
    }
}
```
Usage Examples
Basic Feature Checking
```rust
// Check if a feature is enabled
let analytics_enabled = feature_service.is_feature_enabled("enable_analytics")?;

if analytics_enabled {
    analytics_service.track_event("page_view", &event_data);
}
```
Contextual Feature Checking
```rust
// Create context with user information
let context = FeatureContext::for_user(
    &user.id,
    user.roles.clone(),
    user.groups.clone(),
);

// Check if premium feature is available for this user
let premium_enabled = feature_service.is_enabled("premium_features", &context)?;

if premium_enabled {
    return Ok(Json(premium_content));
} else {
    return Err(AppError::forbidden("Premium subscription required"));
}
```
Environment-Based Features
```rust
// Create environment-specific context
let context = FeatureContext::for_environment("production");

// Check if feature is enabled in this environment
let beta_feature = feature_service.is_enabled("new_ui", &context)?;

if beta_feature {
    // Use new UI components
    html_response.render_new_ui()
} else {
    // Use classic UI components
    html_response.render_classic_ui()
}
```
Percentage Rollout
```rust
// Define a percentage rollout condition
let percentage_condition = PercentageRolloutCondition::new("new_checkout", 25);

// Create a feature flag with this condition
let feature = FeatureFlag {
    name: "new_checkout".to_string(),
    description: "New checkout experience".to_string(),
    default_value: false,
    rules: vec![
        FeatureRule {
            condition: Box::new(percentage_condition),
            value: true,
        },
    ],
};

// Register the feature
feature_registry.register(feature)?;
```
Feature Flag Configuration
```yaml
# config/default.yaml
features:
  enable_analytics: true
  premium_features: false
  new_ui: false
  experimental_api: false
```

```yaml
# config/production.yaml
features:
  enable_analytics: true
  premium_features: true
  new_ui: false
  experimental_api: false
```
Benefits
- Continuous Deployment: Deploy code with disabled features for later activation
- Risk Mitigation: Quickly disable problematic features without deployment
- Progressive Rollout: Release features to a subset of users for feedback
- A/B Testing: Compare different implementations with controlled exposure
- Operational Control: Manage resource-intensive features during peak loads
- Subscription Management: Tie feature access to user subscription levels
- Environment Control: Different behavior in development, testing, and production
Implementation Considerations
- Performance: Efficient feature flag checking, especially for high-traffic paths
- Default Behavior: Clear fallback behavior when flag evaluation fails
- Monitoring: Track feature flag usage and impact on application behavior
- Flag Cleanup: Process for removing unused feature flags over time
- Flag Discovery: Tools for developers to discover available feature flags
- Testing: Ability to test code paths for both enabled and disabled states
Advanced Techniques
Gradual Rollout Strategy
Implementing a gradual rollout based on user IDs:
```rust
pub struct GradualRolloutCondition {
    feature_name: String,
    rollout_percentage: u8,
}

impl FeatureCondition for GradualRolloutCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool {
        if let Some(user) = &context.user {
            // Create a deterministic hash based on user ID and feature name
            let seed = format!("{}:{}", user.id, self.feature_name);
            let hash = calculate_hash(&seed);

            // Map hash to 0-100 range and compare with rollout percentage
            let user_value = hash % 100;
            return user_value < self.rollout_percentage as u64;
        }
        false
    }
}
```
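The `calculate_hash` helper used above is not shown in the source. A minimal version can be sketched on top of the standard library's `DefaultHasher` (an assumption, not the Navius implementation — and note that `DefaultHasher`'s algorithm is not guaranteed stable across Rust releases, so a production rollout should pin a specific hash function with fixed keys):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical helper: map a seed string to a u64, deterministically
/// within a single build of the application.
fn calculate_hash(seed: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // The same seed always lands in the same bucket...
    let a = calculate_hash("user-123:new_checkout") % 100;
    let b = calculate_hash("user-123:new_checkout") % 100;
    assert_eq!(a, b);

    // ...and the bucket is always in the 0..100 range.
    assert!(a < 100);
    println!("user-123 falls in bucket {}", a);
}
```

Because the bucket depends only on the user ID and feature name, a user who is inside the 25% rollout stays inside it on every request, which keeps the experience consistent.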
Feature Combinations
Handling complex conditions with multiple factors:
```rust
pub struct AndCondition {
    conditions: Vec<Box<dyn FeatureCondition>>,
}

impl FeatureCondition for AndCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool {
        self.conditions.iter().all(|c| c.evaluate(context))
    }
}

pub struct OrCondition {
    conditions: Vec<Box<dyn FeatureCondition>>,
}

impl FeatureCondition for OrCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool {
        self.conditions.iter().any(|c| c.evaluate(context))
    }
}
```
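To see the combinators in action, here is a self-contained sketch using a simplified `FeatureContext` (only the environment field) and a hypothetical `EnvironmentCondition`; both are assumptions made to keep the example runnable, while the combinator logic mirrors the pattern above:

```rust
// Simplified context carrying only the environment name
// (the full FeatureContext also carries user data and attributes).
struct FeatureContext {
    environment: String,
}

trait FeatureCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool;
}

/// Hypothetical condition: true when the context environment matches.
struct EnvironmentCondition {
    environment: String,
}

impl FeatureCondition for EnvironmentCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool {
        context.environment == self.environment
    }
}

/// Conjunction of conditions: all must hold.
struct AndCondition {
    conditions: Vec<Box<dyn FeatureCondition>>,
}

impl FeatureCondition for AndCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool {
        self.conditions.iter().all(|c| c.evaluate(context))
    }
}

fn main() {
    let prod = FeatureContext { environment: "production".to_string() };

    // One matching condition: the conjunction holds.
    let all_match = AndCondition {
        conditions: vec![
            Box::new(EnvironmentCondition { environment: "production".to_string() }),
        ],
    };
    // Adding a non-matching condition flips the conjunction to false.
    let mixed = AndCondition {
        conditions: vec![
            Box::new(EnvironmentCondition { environment: "production".to_string() }),
            Box::new(EnvironmentCondition { environment: "staging".to_string() }),
        ],
    };

    assert!(all_match.evaluate(&prod));
    assert!(!mixed.evaluate(&prod));
}
```

Because `AndCondition` and `OrCondition` are themselves `FeatureCondition`s, they nest freely, so arbitrary boolean expressions over environment, roles, and rollout percentages can be assembled from small parts.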
Feature Dependencies
Handling features that depend on other features:
```rust
pub struct DependsOnFeatureCondition {
    dependency: String,
}

impl FeatureCondition for DependsOnFeatureCondition {
    fn evaluate(&self, context: &FeatureContext) -> bool {
        // Get the feature service from the context
        if let Some(feature_service) = context.get_service::<FeatureService>() {
            return feature_service
                .is_enabled(&self.dependency, context)
                .unwrap_or(false);
        }
        false
    }
}
```
Feature Metrics
Tracking feature flag usage:
```rust
pub struct MetricsWrappedFeatureService {
    inner: Arc<FeatureService>,
    metrics: Arc<dyn MetricsCollector>,
}

impl MetricsWrappedFeatureService {
    pub fn is_enabled(&self, name: &str, context: &FeatureContext) -> Result<bool, AppError> {
        let start = Instant::now();
        let result = self.inner.is_enabled(name, context);

        // Record metrics
        let elapsed = start.elapsed();
        self.metrics.record_timing("feature.check.duration", elapsed);

        // Borrow the result so it can still be returned below
        if let Ok(value) = &result {
            self.metrics.increment_counter(&format!(
                "feature.{}.{}",
                name,
                if *value { "enabled" } else { "disabled" }
            ));
        } else {
            self.metrics.increment_counter(&format!("feature.{}.error", name));
        }

        result
    }
}
```
Related Patterns
- Strategy Pattern: Feature flags often select between strategy implementations
- Factory Pattern: Creating different implementations based on feature flags
- Decorator Pattern: Adding optional behavior when features are enabled
- Configuration Pattern: Feature flags are a special form of configuration
- Cache Provider Pattern: Caching feature flag values for performance
References
title: "Health Check Pattern"
description: "Design and implementation of the health check pattern with pluggable health indicators"
category: patterns
tags:
  - patterns
  - health
  - monitoring
  - architecture
related:
  - reference/patterns/repository-pattern.md
  - reference/api/health-api.md
  - examples/health-service-example.md
last_updated: March 27, 2025
version: 1.0
Health Check Pattern
Overview
The Health Check Pattern provides a standardized way to assess the operational status of an application and its dependencies. It enables monitoring systems to detect issues and facilitates automated recovery procedures.
Problem Statement
Modern applications have numerous dependencies (databases, external services, caches, etc.) that can fail independently. Applications need to:
- Report their own operational status
- Check the status of all dependencies
- Provide detailed diagnostics for troubleshooting
- Support both simple availability checks and detailed health information
- Allow easy extension for new components
Solution: Health Check Pattern with Pluggable Indicators
The Health Check Pattern in Navius uses a provider-based architecture with these components:
- HealthIndicator Trait: Interface for individual component health checks
- HealthProvider Trait: Interface for components that provide health indicators
- HealthDiscoveryService: Automatically discovers and registers health indicators
- HealthService: Orchestrates health checks and aggregates results
- HealthDashboard: Tracks health history and provides detailed reporting
Pattern Structure
```
┌─────────────────┐          ┌────────────────────┐
│  HealthService  │◀─────────┤ HealthIndicator(s) │
└────────┬────────┘          └────────────────────┘
         │                             ▲
         │                             │ implements
         │                    ┌────────┴─────────┐
         │                    │Component-specific│
         │                    │ HealthIndicators │
         ▼                    └──────────────────┘
┌─────────────────┐
│HealthController │
└─────────────────┘
```
Implementation
1. Health Indicator Interface
The `HealthIndicator` trait defines the contract for all health checks:
```rust
pub trait HealthIndicator: Send + Sync {
    /// Get the name of this health indicator
    fn name(&self) -> String;

    /// Check the health of this component
    fn check_health(&self, state: &Arc<AppState>) -> DependencyStatus;

    /// Optional metadata about this indicator
    fn metadata(&self) -> HashMap<String, String> {
        HashMap::new()
    }

    /// Order in which this indicator should run (lower values run first)
    fn order(&self) -> i32 {
        0
    }

    /// Whether this indicator is critical (system is DOWN if it fails)
    fn is_critical(&self) -> bool {
        false
    }
}
```
2. Health Provider Interface
The `HealthProvider` trait enables components to provide their own health indicators:
```rust
pub trait HealthProvider: Send + Sync {
    /// Create health indicators for the application
    fn create_indicators(&self) -> Vec<Box<dyn HealthIndicator>>;

    /// Whether this provider is enabled
    fn is_enabled(&self, config: &AppConfig) -> bool;
}
```
3. Health Service
The `HealthService` aggregates and manages health indicators:
```rust
pub struct HealthService {
    indicators: Vec<Box<dyn HealthIndicator>>,
    providers: Vec<Box<dyn HealthProvider>>,
}

impl HealthService {
    pub fn new() -> Self { /* ... */ }

    pub fn register_indicator(&mut self, indicator: Box<dyn HealthIndicator>) { /* ... */ }

    pub fn register_provider(&mut self, provider: Box<dyn HealthProvider>) { /* ... */ }

    pub async fn check_health(&self) -> Result<HealthStatus, ServiceError> { /* ... */ }
}
```
4. Health Discovery
The `HealthDiscoveryService` automatically discovers health indicators:
```rust
pub struct HealthDiscoveryService;

impl HealthDiscoveryService {
    pub fn new() -> Self { /* ... */ }

    pub async fn discover_indicators(&self) -> Vec<Box<dyn HealthIndicator>> { /* ... */ }
}
```
Benefits
- Standardization: Consistent approach to health monitoring across components
- Extensibility: Easy to add health checks for new components
- Automation: Facilitates automated monitoring and recovery
- Detailed Diagnostics: Provides rich health information for troubleshooting
- Dynamic Discovery: Automatically detects new health indicators
- Priority Execution: Checks dependencies in correct order
Implementation Considerations
1. Defining Health Status
Health status should be simple but descriptive:
- UP: Component is functioning normally
- DOWN: Component is not functioning
- DEGRADED: Component is functioning with reduced capabilities
- UNKNOWN: Component status cannot be determined
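These four values can be modeled as a small enum. The sketch below is an assumption (Navius's actual `DependencyStatus` type is richer and carries per-component details); it also shows a typical worst-status aggregation rule, where a single DOWN component takes the whole system DOWN:

```rust
/// Simplified health status, ordered from healthiest to least healthy.
/// The derived Ord follows declaration order, so `max` picks the worst.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Status {
    Up,
    Unknown,
    Degraded,
    Down,
}

/// Aggregate component statuses: the overall status is the worst one seen.
fn aggregate(components: &[Status]) -> Status {
    components.iter().copied().max().unwrap_or(Status::Unknown)
}

fn main() {
    let healthy = [Status::Up, Status::Up];
    assert_eq!(aggregate(&healthy), Status::Up);

    // A single DOWN dependency dominates the aggregate.
    let broken = [Status::Up, Status::Degraded, Status::Down];
    assert_eq!(aggregate(&broken), Status::Down);
}
```

Where UNKNOWN ranks relative to DEGRADED is a policy choice; some systems treat an unknown critical dependency as DOWN outright, which is easy to express by mapping statuses before aggregating.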
2. Health Check Categories
Organize health checks into categories:
- Critical Infrastructure: Database, cache, file system
- External Dependencies: APIs, third-party services
- Internal Components: Message queues, background tasks
- Environment: Disk space, memory, CPU
3. Health Check Response
The health API should support multiple response formats:
- Simple UP/DOWN for load balancers and basic monitoring
- Detailed response with component-specific health for diagnostics
- Historical data for trend analysis
4. Security Considerations
Health endpoints contain sensitive information:
- Secure detailed health endpoints with authentication
- Limit information in public health endpoints
- Don't expose connection strings or credentials
API Endpoints
The health service exposes these standard endpoints:
- `/actuator/health`: Basic health status (UP/DOWN)
- `/actuator/health/detail`: Detailed component health
- `/actuator/dashboard`: Health history dashboard
Example Implementation
Basic Health Indicator
```rust
pub struct DatabaseHealthIndicator {
    connection_string: String,
}

impl HealthIndicator for DatabaseHealthIndicator {
    fn name(&self) -> String {
        "database".to_string()
    }

    fn check_health(&self, _state: &Arc<AppState>) -> DependencyStatus {
        match check_database_connection(&self.connection_string) {
            Ok(_) => DependencyStatus::up(),
            // Report the error, but never expose the connection string itself
            // (see the security considerations above)
            Err(e) => DependencyStatus::down().with_detail("error", e.to_string()),
        }
    }

    fn is_critical(&self) -> bool {
        true
    }

    fn order(&self) -> i32 {
        10 // Run early since other components may depend on DB
    }
}
```
Health Response Format
```json
{
  "status": "UP",
  "timestamp": "2024-03-26T12:34:56.789Z",
  "components": [
    {
      "name": "database",
      "status": "UP",
      "details": {
        "type": "postgres",
        "version": "14.5"
      }
    },
    {
      "name": "redis-cache",
      "status": "UP",
      "details": {
        "used_memory": "1.2GB",
        "uptime": "3d"
      }
    },
    {
      "name": "external-api",
      "status": "DOWN",
      "details": {
        "error": "Connection timeout",
        "url": "https://api.example.com/status"
      }
    }
  ]
}
```
Related Patterns
- Circuit Breaker Pattern: Used with health checks to prevent cascading failures
- Bulkhead Pattern: Isolates components to prevent system-wide failures
- Observer Pattern: Health indicators observe component status
- Repository Pattern: Often used with health checks for data access
- Strategy Pattern: Different health check strategies can be implemented
References
- Health Check API - Spring Boot Actuator
- Health Check Pattern - Cloud Design Patterns
- Kubernetes Liveness and Readiness Probes
title: "Import Pattern Guidelines"
description: "Documentation about Import Pattern Guidelines"
category: reference
tags:
  - api
last_updated: March 27, 2025
version: 1.0
Import Pattern Guidelines
This document outlines the import pattern conventions used in the Navius project after the project restructuring.
Core Import Patterns
All imports from core modules should use the `crate::core::` prefix:
```rust
// CORRECT
use crate::core::error::{AppError, Result};
use crate::core::config::AppConfig;
use crate::core::utils::api_resource::ApiResource;

// INCORRECT
use crate::error::{AppError, Result};  // Missing core prefix
use crate::config::AppConfig;          // Missing core prefix
```
App Import Patterns
When app modules need to import from other app modules, use the `crate::app::` prefix:
```rust
// CORRECT
use crate::app::repository::UserRepository;
use crate::app::services::UserService;

// INCORRECT
use crate::repository::UserRepository;  // Missing app prefix
use crate::services::UserService;       // Missing app prefix
```
Importing Core Functionality in App Modules
When app modules need to import core functionality, use the `crate::core::` prefix:
```rust
// CORRECT
use crate::core::error::{AppError, Result};
use crate::core::config::AppConfig;

// INCORRECT
use crate::error::{AppError, Result};  // Should use core prefix
use crate::config::AppConfig;          // Should use core prefix
```
Generated API Imports
For generated API code, use the `crate::generated_apis` prefix:
```rust
// CORRECT
use crate::generated_apis::petstore_api::models::Pet;

// INCORRECT
use crate::target::generated::petstore_api::models::Pet;  // Don't use target path directly
```
Library Exports
In the root `lib.rs` file, expose only what is necessary for consumers:
```rust
// Public exports from core
pub use crate::core::router;
pub use crate::core::cache;
pub use crate::core::config;

// Public exports from app
pub use crate::app::api;
pub use crate::app::services;
```
Automated Checking
We have implemented a script that can check and fix import patterns across the codebase:
```shell
./.devtools/scripts/fix-imports-naming.sh
```
This script automatically updates imports in core modules to use the `crate::core::` prefix and identifies potential issues in app modules.
Benefits of Consistent Import Patterns
- Clarity - Clearly distinguishes between core and app components
- Maintainability - Makes code easier to maintain as the project evolves
- Refactoring - Simplifies refactoring efforts
- Onboarding - Makes it easier for new developers to understand the codebase structure
Related Documents
- API Standards - API design guidelines
- Error Handling - Error handling patterns
Logging Service Pattern
This document describes the Logging Service Pattern implemented in the Navius framework, which provides a clean abstraction over logging functionality through a provider-based approach.
Overview
The Logging Service Pattern follows these key principles:
- Separation of Interface and Implementation: Logging operations are defined by a core interface, with multiple implementations provided via the provider pattern.
- Pluggable Providers: Different logging implementations can be swapped in and out based on configuration.
- Structured Logging: All logging is done with structured data rather than raw strings, making logs more searchable and meaningful.
- Context Propagation: Logging context can be inherited across components via child loggers.
- Configuration-driven: Behavior is controlled through configuration rather than code changes.
Core Components
The pattern consists of the following components:
LoggingOperations Interface
The `LoggingOperations` trait defines the core functionality exposed to application code:
```rust
pub trait LoggingOperations: Send + Sync + 'static {
    fn log(&self, level: LogLevel, info: LogInfo) -> Result<(), LoggingError>;
    fn log_structured(&self, record: StructuredLog) -> Result<(), LoggingError>;
    // Additional methods...
}
```
LoggingProvider Interface
The `LoggingProvider` trait defines how logging implementations are created:
```rust
pub trait LoggingProvider: Send + Sync + 'static {
    async fn create_logger(
        &self,
        config: &LoggingConfig,
    ) -> Result<Arc<dyn LoggingOperations>, LoggingError>;

    fn name(&self) -> &'static str;

    fn supports(&self, config: &LoggingConfig) -> bool;
}
```
LoggingProviderRegistry
A registry that manages available providers and creates loggers based on configuration:
```rust
pub struct LoggingProviderRegistry {
    providers: Mutex<HashMap<String, Arc<dyn LoggingProvider>>>,
    default_provider_name: Mutex<String>,
}
```
Implementation Steps
To implement this pattern in your own code:
- Define the logging operations interface
- Create a provider interface for instantiating loggers
- Implement a provider registry for managing available providers
- Create concrete implementations of the logging operations
- Implement the provider for each concrete implementation
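The steps above can be sketched end to end in a few dozen lines. This is a deliberately synchronous toy (the real Navius traits are async, take configuration, and carry structured log records); all names here are simplified stand-ins:

```rust
use std::collections::HashMap;

/// Step 1: a trimmed-down logging operations interface.
trait LoggingOperations {
    fn log(&self, level: &str, message: &str) -> Result<(), String>;
}

/// Step 4: a concrete console implementation.
struct ConsoleLogger;

impl LoggingOperations for ConsoleLogger {
    fn log(&self, level: &str, message: &str) -> Result<(), String> {
        println!("[{}] {}", level, message);
        Ok(())
    }
}

/// Step 2: a provider interface that knows how to build a logger.
trait LoggingProvider {
    fn name(&self) -> &'static str;
    fn create_logger(&self) -> Box<dyn LoggingOperations>;
}

/// Step 5: the provider for the console implementation.
struct ConsoleLoggerProvider;

impl LoggingProvider for ConsoleLoggerProvider {
    fn name(&self) -> &'static str {
        "console"
    }
    fn create_logger(&self) -> Box<dyn LoggingOperations> {
        Box::new(ConsoleLogger)
    }
}

/// Step 3: a registry selecting a provider by name.
struct Registry {
    providers: HashMap<&'static str, Box<dyn LoggingProvider>>,
}

impl Registry {
    fn create(&self, name: &str) -> Option<Box<dyn LoggingOperations>> {
        self.providers.get(name).map(|p| p.create_logger())
    }
}

fn main() {
    let mut providers: HashMap<&'static str, Box<dyn LoggingProvider>> = HashMap::new();
    providers.insert("console", Box::new(ConsoleLoggerProvider));
    let registry = Registry { providers };

    // Selecting the backend by name is what makes it configuration-driven.
    let logger = registry.create("console").expect("provider registered");
    assert!(logger.log("INFO", "service started").is_ok());
    assert!(registry.create("missing").is_none());
}
```

Because application code only ever sees `Box<dyn LoggingOperations>`, swapping the console backend for a tracing or file backend is a registry change, not a code change.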
Benefits
- Testability: Logging can be easily mocked for testing purposes
- Extensibility: New logging backends can be added without changing application code
- Consistency: All logs follow the same structured format
- Configuration: Behavior can be changed through configuration
- Runtime Selection: Logging implementation can be selected at runtime
Example Usage
See the Logging Service Example for detailed usage examples.
Related Patterns
- Provider Pattern: For creating implementations of an interface
- Registry Pattern: For managing and accessing providers
- Factory Method Pattern: For creating logger instances
- Decorator Pattern: For adding functionality to loggers (e.g., child loggers)
- Strategy Pattern: For selecting logging implementation at runtime
Recommended Implementation
The Navius framework provides a complete implementation of this pattern in the `core::logger` module. This implementation includes:
- `TracingLoggerProvider`: A provider for tracing-based logging
- `ConsoleLoggerProvider`: A provider for colorized console output
- `LoggingProviderRegistry`: A registry for managing providers
- Factory methods for creating loggers based on configuration
Refer to the framework implementation for a complete reference implementation of this pattern.
Considerations
- Ensure thread safety when implementing loggers
- Consider performance implications, especially for high-volume logging
- Provide extension points for advanced features like filtering and sampling
- Plan for error handling when logging fails
- Consider how to handle buffering and asynchronous logging
title: Repository Pattern
description: Implementing the repository pattern for domain entities
category: patterns
tags:
  - patterns
  - repository
  - entity
  - data-access
related:
  - examples/repository-pattern-example.md
  - roadmaps/25-generic-service-implementations.md
last_updated: March 27, 2025
version: 1.0
Repository Pattern
Overview
The repository pattern provides an abstraction layer between the domain model and data access layers. It centralizes data access logic, making it easier to maintain and test application code.
Key Benefits
- Separation of Concerns: Isolates domain logic from data access code
- Testability: Simplifies writing unit tests with mock repositories
- Flexibility: Enables switching storage mechanisms without changing business logic
- Type Safety: Ensures domain objects are handled correctly across the application
- Maintainability: Centralizes data access logic in a consistent pattern
Implementation in Navius
In the Navius framework, the repository pattern is implemented with several key components:
Entity Trait
The `Entity` trait defines common properties and behaviors for domain objects:
```rust
pub trait Entity: Clone + Debug + Serialize + Send + Sync + 'static {
    /// The ID type for this entity
    type Id: EntityId;

    /// Get the entity's unique identifier
    fn id(&self) -> &Self::Id;

    /// Get the collection/table name this entity belongs to
    fn collection_name() -> String;

    /// Validates that the entity data is valid
    fn validate(&self) -> Result<(), ServiceError> {
        Ok(())
    }
}
```
Repository Trait
The `Repository<E>` trait defines standard CRUD operations for entities:
```rust
#[async_trait]
pub trait Repository<E: Entity>: Send + Sync + 'static {
    /// Find an entity by its ID
    async fn find_by_id(&self, id: &E::Id) -> Result<Option<E>, ServiceError>;

    /// Find all entities in the collection
    async fn find_all(&self) -> Result<Vec<E>, ServiceError>;

    /// Save an entity (create or update)
    async fn save(&self, entity: &E) -> Result<E, ServiceError>;

    /// Delete an entity by its ID
    async fn delete(&self, id: &E::Id) -> Result<bool, ServiceError>;

    /// Count entities in the collection
    async fn count(&self) -> Result<usize, ServiceError>;

    /// Check if an entity with the given ID exists
    async fn exists(&self, id: &E::Id) -> Result<bool, ServiceError> {
        Ok(self.find_by_id(id).await?.is_some())
    }
}
```
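A minimal in-memory implementation makes the contract concrete. The sketch below is a simplification under stated assumptions: it is synchronous, uses plain `String` IDs instead of an associated `Id` type, and omits `ServiceError`; the real Navius traits are async and generic over the ID:

```rust
use std::collections::HashMap;

/// Simplified entity contract: cloneable, with a string ID.
trait Entity: Clone {
    fn id(&self) -> String;
}

#[derive(Clone, Debug, PartialEq)]
struct User {
    id: String,
    username: String,
}

impl Entity for User {
    fn id(&self) -> String {
        self.id.clone()
    }
}

/// In-memory repository keyed by entity ID.
struct InMemoryRepository<E: Entity> {
    items: HashMap<String, E>,
}

impl<E: Entity> InMemoryRepository<E> {
    fn new() -> Self {
        Self { items: HashMap::new() }
    }

    /// Save (create or update) an entity, returning the stored copy.
    fn save(&mut self, entity: &E) -> E {
        self.items.insert(entity.id(), entity.clone());
        entity.clone()
    }

    fn find_by_id(&self, id: &str) -> Option<E> {
        self.items.get(id).cloned()
    }

    fn delete(&mut self, id: &str) -> bool {
        self.items.remove(id).is_some()
    }

    fn count(&self) -> usize {
        self.items.len()
    }
}

fn main() {
    let mut repo = InMemoryRepository::new();
    let user = User { id: "user-1".to_string(), username: "alice".to_string() };

    repo.save(&user);
    assert_eq!(repo.count(), 1);
    assert_eq!(repo.find_by_id("user-1"), Some(user));
    assert!(repo.delete("user-1"));
    assert_eq!(repo.count(), 0);
}
```

An in-memory repository like this is exactly what makes the pattern testable: unit tests for business logic run against it without any database, while production wires in a PostgreSQL-backed implementation of the same interface.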
Repository Provider
The `RepositoryProvider` trait enables creating repositories for different entity types:
```rust
#[async_trait]
pub trait RepositoryProvider: Send + Sync + 'static {
    /// Create a repository for the given entity type
    async fn create_repository<E: Entity>(
        &self,
        config: RepositoryConfig,
    ) -> Result<Box<dyn Repository<E>>, ServiceError>;

    /// Check if this provider supports the given repository configuration
    fn supports(&self, config: &RepositoryConfig) -> bool;
}
```
Repository Service
The `RepositoryService` manages repository creation and configuration:
```rust
pub struct RepositoryService {
    providers: Arc<RwLock<HashMap<String, Box<dyn Any + Send + Sync>>>>,
    configs: Arc<RwLock<HashMap<String, RepositoryConfig>>>,
    default_provider: String,
}
```
Usage Examples
Defining an Entity
```rust
#[derive(Debug, Clone, Serialize, Deserialize, Validate)]
pub struct User {
    pub id: Uuid,
    pub username: String,
    pub email: String,
    pub active: bool,
}

impl Entity for User {
    type Id = Uuid;

    fn id(&self) -> &Self::Id {
        &self.id
    }

    fn collection_name() -> String {
        "users".to_string()
    }

    fn validate(&self) -> Result<(), ServiceError> {
        // Validation logic...
        Ok(())
    }
}
```
Creating and Using a Repository
```rust
// Create a repository service
let repo_service = RepositoryService::new();

// Register a repository provider
repo_service.register_provider("memory", InMemoryRepositoryProvider::new()).await?;

// Create a repository for User entities
let config = RepositoryConfig {
    provider: "memory".to_string(),
    ..Default::default()
};
let user_repo = repo_service.create_repository::<User>(config).await?;

// Use the repository
let user = User::new("username", "[email protected]", "Display Name");
let saved_user = user_repo.save(&user).await?;
let found_user = user_repo.find_by_id(&saved_user.id).await?;
```
Using the Generic Repository
The `GenericRepository<E>` provides a simplified facade for repositories:
```rust
// Create a generic repository
let user_repo = GenericRepository::<User>::with_service(&repo_service).await?;

// Use the generic repository
let user = User::new("username", "[email protected]", "Display Name");
let saved_user = user_repo.save(&user).await?;
```
Creating Custom Repository Methods
Create a custom repository with specialized query methods:
```rust
pub struct UserRepository {
    inner: Arc<dyn Repository<User>>,
}

impl UserRepository {
    pub async fn find_by_username(&self, username: &str) -> Result<Option<User>, ServiceError> {
        let all_users = self.inner.find_all().await?;
        Ok(all_users.into_iter().find(|u| u.username == username))
    }

    // Implement other custom methods...
}

// Delegate standard operations to the inner repository
#[async_trait]
impl Repository<User> for UserRepository {
    async fn find_by_id(&self, id: &Uuid) -> Result<Option<User>, ServiceError> {
        self.inner.find_by_id(id).await
    }

    // Implement other required methods...
}
```
Best Practices
- Entity Validation: Implement thorough validation in the `validate()` method
- Custom Repositories: Create specialized repositories for complex query needs
- Error Handling: Use `ServiceError` for consistent error handling
- Type Safety: Use the proper entity and ID types throughout
- Test Coverage: Create comprehensive tests for repositories
- Immutability: Treat entities as immutable objects when possible
- Transaction Support: Add transaction support for repository operations when needed
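The validation logic elided in the `validate()` example above can be sketched with plain `std` types. This is an illustrative sketch: the plain `String` error type stands in for the framework's `ServiceError`, and the specific rules (non-empty username, length cap, structural email check) are assumptions, not Navius requirements.

```rust
// Minimal entity-validation sketch; the error type and rules are illustrative.
#[derive(Debug, Clone)]
struct User {
    username: String,
    email: String,
}

impl User {
    fn validate(&self) -> Result<(), String> {
        if self.username.trim().is_empty() {
            return Err("username must not be empty".into());
        }
        if self.username.len() > 64 {
            return Err("username must be at most 64 characters".into());
        }
        // Cheap structural check; a real validator would be stricter.
        if !self.email.contains('@') || self.email.starts_with('@') || self.email.ends_with('@') {
            return Err("email must contain a local part and a domain".into());
        }
        Ok(())
    }
}

fn main() {
    let ok = User { username: "alice".into(), email: "alice@example.com".into() };
    assert!(ok.validate().is_ok());

    let bad = User { username: "".into(), email: "alice@example.com".into() };
    assert!(bad.validate().is_err());
    println!("validation checks passed");
}
```

Keeping validation on the entity itself means every repository implementation can call it before persisting, regardless of backend.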
Related Resources
See Also
---
title: "Service Registration Pattern"
description: "Design and implementation of the service registration pattern in Navius"
category: patterns
tags:
  - patterns
  - service
  - dependency-injection
  - architecture
related:
  - reference/patterns/repository-pattern.md
  - examples/dependency-injection-example.md
last_updated: March 27, 2025
version: 1.0
---
Service Registration Pattern
Overview
The Service Registration Pattern in Navius provides a centralized mechanism for registering, retrieving, and managing services throughout an application. This pattern facilitates dependency injection, promotes loose coupling between components, and improves testability.
Problem Statement
Modern applications often consist of multiple interdependent services that need to work together. This creates several challenges:
- Service Discovery: How do components locate the services they depend on?
- Lifecycle Management: How are service instances created, shared, and potentially disposed?
- Dependency Resolution: How are dependencies between services managed?
- Configuration Injection: How are services configured based on application settings?
- Testability: How can services be easily mocked or replaced in tests?
Solution: Service Registration Pattern
The Service Registration Pattern in Navius addresses these challenges through a centralized registry that manages service instances and their dependencies:
- ServiceRegistry: A central container for all services
- Type-Based Lookup: Services are registered and retrieved by their type
- Dependency Injection: Services declare their dependencies explicitly
- Lifecycle Management: The registry manages service instantiation and sharing
Pattern Structure
```text
        ┌─────────────────┐
        │ ServiceRegistry │
        └────────┬────────┘
                 │
                 │ contains
                 ▼
┌─────────────────┐    depends on    ┌─────────────────┐
│    ServiceA     │─────────────────▶│    ServiceB     │
└─────────────────┘                  └─────────────────┘
         ▲                                    ▲
         │ implements                         │ implements
         │                                    │
┌─────────────────┐                  ┌─────────────────┐
│  ServiceATrait  │                  │  ServiceBTrait  │
└─────────────────┘                  └─────────────────┘
```
Implementation
Core Service Registry
```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

use crate::core::error::AppError;

pub struct ServiceRegistry {
    services: RwLock<HashMap<TypeId, Box<dyn Any + Send + Sync>>>,
}

impl ServiceRegistry {
    pub fn new() -> Self {
        Self {
            services: RwLock::new(HashMap::new()),
        }
    }

    pub fn register<T: 'static + Send + Sync>(&self, service: T) -> Result<(), AppError> {
        let mut services = self.services.write().map_err(|_| {
            AppError::internal_server_error("Failed to acquire write lock on service registry")
        })?;
        let type_id = TypeId::of::<T>();
        services.insert(type_id, Box::new(service));
        Ok(())
    }

    pub fn get<T: 'static + Clone + Send + Sync>(&self) -> Result<Arc<T>, AppError> {
        let services = self.services.read().map_err(|_| {
            AppError::internal_server_error("Failed to acquire read lock on service registry")
        })?;
        let type_id = TypeId::of::<T>();
        match services.get(&type_id) {
            Some(service) => {
                if let Some(service_ref) = service.downcast_ref::<T>() {
                    Ok(Arc::new(service_ref.clone()))
                } else {
                    Err(AppError::internal_server_error(
                        format!("Service of type {:?} exists but could not be downcast", type_id),
                    ))
                }
            }
            None => Err(AppError::service_not_found(
                format!("No service of type {:?} found in registry", type_id),
            )),
        }
    }
}
```
Service Definition
Services in Navius are defined as structs that implement a specific functionality:
```rust
pub struct UserService {
    // Service state
    config: Arc<AppConfig>,
    repository: Arc<dyn UserRepository>,
}

impl UserService {
    // Constructor that accepts dependencies
    pub fn new(config: Arc<AppConfig>, repository: Arc<dyn UserRepository>) -> Self {
        Self { config, repository }
    }

    // Service methods
    pub async fn get_user(&self, id: &str) -> Result<User, AppError> {
        self.repository.find_by_id(id).await
    }

    pub async fn create_user(&self, user: User) -> Result<User, AppError> {
        self.repository.save(user).await
    }
}
```
Service Registration
Services are registered during application startup:
```rust
// Create and configure the service registry
let registry = Arc::new(ServiceRegistry::new());

// Load configuration
let config = load_config()?;

// Create dependencies
let user_repository = Arc::new(PostgresUserRepository::new(config.clone()));

// Create the user service with its dependencies
let user_service = UserService::new(config.clone(), user_repository.clone());

// Register the service in the registry
registry.register(user_service)?;
```
Service Retrieval
Services are retrieved from the registry when needed:
```rust
async fn handle_get_user(
    State(registry): State<Arc<ServiceRegistry>>,
    Path(id): Path<String>,
) -> Result<Json<User>, AppError> {
    // Get the service from the registry
    let user_service = registry.get::<UserService>()?;

    // Use the service
    let user = user_service.get_user(&id).await?;

    Ok(Json(user))
}
```
Benefits
- Centralized Service Management: Single point of access for all services
- Lifecycle Control: Registry controls how services are instantiated and shared
- Loose Coupling: Components depend on interfaces, not implementations
- Testability: Services can be easily mocked or replaced in tests
- Configuration Injection: Configuration is consistently provided to services
- Type Safety: Type-based lookup ensures services are properly typed
Advanced Techniques
Trait-Based Registration
For greater flexibility, services can be registered based on traits they implement:
```rust
// Define a trait
pub trait Logger: Send + Sync {
    fn log(&self, message: &str);
}

// Implement the trait
pub struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, message: &str) {
        println!("LOG: {}", message);
    }
}

// Register based on trait
registry.register_trait::<dyn Logger, _>(ConsoleLogger)?;

// Retrieve based on trait
let logger = registry.get_trait::<dyn Logger>()?;
logger.log("Hello, world!");
```
Scoped Service Registration
For services with different lifetimes:
```rust
// Singleton scope (default)
registry.register::<UserService>(user_service)?;

// Request scope
registry.register_scoped::<RequestContext>(|| RequestContext::new())?;
```
Factory Registration
For services that need dynamic creation:
```rust
// Register a factory
registry.register_factory::<Connection>(|| {
    let conn = create_database_connection(config.database_url);
    Box::new(conn)
})?;
```
Automatic Dependency Resolution
For more advanced dependency injection:
```rust
// Register components
registry.register::<Config>(config)?;
registry.register::<DatabasePool>(pool)?;

// Automatically resolve and create UserService with its dependencies
let user_service = registry.resolve::<UserService>()?;
```
Implementation Considerations
- Thread Safety: All services and the registry itself must be thread-safe (Send + Sync)
- Error Handling: Well-defined error types for registration and lookup failures
- Performance: Efficient access to services in high-throughput scenarios
- Memory Management: Proper handling of service lifecycle and cleanup
- Circular Dependencies: Detection and prevention of circular dependencies
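One way a registry could detect circular dependencies is a depth-first search over the declared dependency graph. The sketch below is illustrative only: the function name, graph representation, and service names are assumptions, not part of the Navius API.

```rust
use std::collections::{HashMap, HashSet};

// Detect a cycle in a service dependency graph (service name -> names of
// services it depends on) using DFS with an explicit recursion stack.
fn has_cycle(deps: &HashMap<&str, Vec<&str>>) -> bool {
    fn visit<'a>(
        node: &'a str,
        deps: &HashMap<&'a str, Vec<&'a str>>,
        visiting: &mut HashSet<&'a str>,
        done: &mut HashSet<&'a str>,
    ) -> bool {
        if done.contains(node) {
            return false;
        }
        // Node already on the current DFS stack: we found a cycle.
        if !visiting.insert(node) {
            return true;
        }
        for dep in deps.get(node).into_iter().flatten().copied() {
            if visit(dep, deps, visiting, done) {
                return true;
            }
        }
        visiting.remove(node);
        done.insert(node);
        false
    }

    let mut visiting = HashSet::new();
    let mut done = HashSet::new();
    deps.keys().any(|&n| visit(n, deps, &mut visiting, &mut done))
}

fn main() {
    let mut acyclic = HashMap::new();
    acyclic.insert("AuthService", vec!["UserService"]);
    acyclic.insert("UserService", vec!["Config"]);
    assert!(!has_cycle(&acyclic));

    let mut cyclic = HashMap::new();
    cyclic.insert("A", vec!["B"]);
    cyclic.insert("B", vec!["A"]);
    assert!(has_cycle(&cyclic));
    println!("cycle detection ok");
}
```

Running such a check at registration (or resolution) time turns a would-be deadlock or stack overflow into a clear startup error.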
Usage Examples
Basic Service Registration
```rust
// Create registry
let registry = Arc::new(ServiceRegistry::new());

// Register services
registry.register(UserService::new(config.clone(), user_repo.clone()))?;
registry.register(AuthService::new(config.clone(), user_service.clone()))?;

// Create router with registry
let app = Router::new()
    .route("/users", get(get_users))
    .with_state(registry);
```
Testing with Mock Services
```rust
#[tokio::test]
async fn test_user_service() {
    // Create registry with mock dependencies
    let registry = Arc::new(ServiceRegistry::new());
    let mock_repository = Arc::new(MockUserRepository::new());

    // Set up mock expectations
    mock_repository.expect_find_by_id()
        .with(eq("user-1"))
        .returning(|_| Ok(User::new("user-1", "Test User")));

    // Register service with mock dependency
    let user_service = UserService::new(Arc::new(AppConfig::default()), mock_repository);
    registry.register(user_service).unwrap();

    // Create handler with registry
    let handler = get_user_handler(registry);

    // Test the handler
    let response = handler(Path("user-1".to_string())).await;

    // Verify response
    assert!(response.is_ok());
    let user = response.unwrap().0;
    assert_eq!(user.id, "user-1");
    assert_eq!(user.name, "Test User");
}
```
Related Patterns
- Dependency Injection Pattern: Service Registration is a form of dependency injection
- Factory Pattern: Used for creating service instances
- Strategy Pattern: Services often implement different strategies
- Singleton Pattern: Services are typically singleton instances
- Repository Pattern: Commonly used with service registration for data access
References
Testing Patterns
---
title: ""
description: "Reference documentation for Navius"
category: "Reference"
tags: ["documentation", "reference"]
last_updated: "April 3, 2025"
version: "1.0"
---
Standards
Naming Conventions
This document outlines the naming conventions used in the Navius project after the project restructuring.
File and Directory Naming
All Rust source files and directories should follow the snake_case convention:
```text
src/
├── app/
│   ├── api/
│   │   └── user_api.rs        ✅ Good: snake_case
│   └── services/
│       └── user_service.rs    ✅ Good: snake_case
└── core/
    ├── repository/
    │   └── user_repository.rs ✅ Good: snake_case
    └── utils/
        └── string_utils.rs    ✅ Good: snake_case

// ❌ Bad examples:
// userApi.rs
// UserService.rs
// String-Utils.rs
```
Module Declarations
Module declarations should match the file names:
```rust
// In src/core/utils/mod.rs
pub mod string_utils; // ✅ Matches file name string_utils.rs
pub mod date_format;  // ✅ Matches file name date_format.rs

// ❌ Bad examples:
// pub mod StringUtils;
// pub mod Date_Format;
```
Structure and Enum Naming
- Structures and Enums: Use PascalCase (UpperCamelCase)
- Traits: Use PascalCase (UpperCamelCase)
```rust
// Structures - PascalCase
pub struct UserRepository { /* ... */ }
pub struct ApiResource { /* ... */ }

// Enums - PascalCase
pub enum UserRole {
    Admin,
    User,
    Guest,
}

// Traits - PascalCase
pub trait Repository { /* ... */ }
pub trait CacheProvider { /* ... */ }
```
Function and Method Naming
Functions and methods should use snake_case:
```rust
// Functions - snake_case
pub fn create_user() { /* ... */ }
pub fn validate_input() { /* ... */ }

// Methods - snake_case
impl UserService {
    pub fn find_by_id(&self, id: &str) { /* ... */ }
    pub fn update_profile(&self, user: &User) { /* ... */ }
}
```
Variable and Parameter Naming
Variables and parameters should use snake_case:
```rust
// Variables and parameters - snake_case
let user_id = "123";
let connection_string = "postgres://...";

fn process_request(request_body: &str, user_context: &Context) { /* ... */ }
```
Constants and Static Variables
Constants and static variables should use SCREAMING_SNAKE_CASE:
```rust
// Constants - SCREAMING_SNAKE_CASE
const MAX_CONNECTIONS: u32 = 100;
const DEFAULT_TIMEOUT_MS: u64 = 5000;

// Static variables - SCREAMING_SNAKE_CASE
static API_VERSION: &str = "v1";
```
Type Aliases
Type aliases should use PascalCase:
```rust
// Type aliases - PascalCase
type ConnectionPool = Pool<Connection>;
type Result<T> = std::result::Result<T, AppError>;
```
Consistent Naming Across Files
Related components should have consistent naming:
```rust
// Related components
mod user_repository; // File: user_repository.rs
mod user_service;    // File: user_service.rs
mod user_api;        // File: user_api.rs

// Structures within files
pub struct UserRepository { /* ... */ } // In user_repository.rs
pub struct UserService { /* ... */ }    // In user_service.rs
```
Automated Checking
We've implemented a script to help identify files that don't follow the naming conventions:
```bash
./.devtools/scripts/fix-imports-naming.sh
```
This script identifies files with uppercase characters in their names, which may indicate non-compliance with the snake_case convention.
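The core of such a check can be sketched in a few lines of shell. This is an illustrative stand-in, not the actual `fix-imports-naming.sh` script; it creates demo files under `/tmp` so it is runnable as-is.

```shell
# Illustrative naming check (not the real script): list Rust source files
# whose file names contain uppercase characters.
mkdir -p /tmp/naming_demo/src
touch /tmp/naming_demo/src/user_api.rs /tmp/naming_demo/src/userApi.rs

# The demo directory path itself is all lowercase, so any match is a
# non-snake_case file name.
find /tmp/naming_demo/src -type f -name '*.rs' | grep '[A-Z]'
```

On the demo files above, only `userApi.rs` is reported; `user_api.rs` passes.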
Benefits of Consistent Naming
- Readability - Makes code more readable and predictable
- Consistency - Ensures all developers follow the same patterns
- Idiomatic - Follows Rust's recommended naming conventions
- Tooling - Better integration with Rust tools and IDE features
Related Documents
- API Standards - API design guidelines
- Error Handling - Error handling patterns
Code Style
Navius Security Guide
Navius takes security seriously, implementing numerous safeguards at different levels of the stack. This document outlines the security features built into the framework and best practices for secure application development.
Security Features
Navius includes a range of security features out of the box:
Authentication & Authorization
- JWT Authentication: Built-in support for JSON Web Tokens
- OAuth2: Integration with standard OAuth2 providers
- Microsoft Entra (Azure AD): Enterprise authentication support
- Role-Based Access Control: Fine-grained permission controls
- Scope-Based Authorization: API-level permission enforcement
Web Security
- HTTPS by Default: Automatic TLS configuration
- CORS Protection: Customizable Cross-Origin Resource Sharing
- Content Security Headers: Protection against XSS and other attacks
- Rate Limiting: Protection against brute force attacks
- Request ID Tracking: Correlation IDs for all requests
Data Security
- SQL Injection Prevention: Type-safe query building
- Password Hashing: Secure password storage with Argon2
- Data Encryption: Support for data encryption at rest
- Input Validation: Type-safe request validation
- Output Sanitization: Prevention of data leakage
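Output sanitization is often implemented with a dedicated response type that simply cannot carry sensitive fields, so credential material can never be serialized by accident. A minimal sketch, with hypothetical type names that are not part of the Navius API:

```rust
// Internal entity: holds state that must never leave the server.
struct User {
    id: u64,
    email: String,
    password_hash: String, // never serialized to clients
}

// Public "view" type: the only shape handlers return to clients.
struct PublicUser {
    id: u64,
    email: String,
}

impl From<&User> for PublicUser {
    fn from(u: &User) -> Self {
        PublicUser {
            id: u.id,
            email: u.email.clone(),
        }
    }
}

fn main() {
    let user = User {
        id: 7,
        email: "a@example.com".into(),
        password_hash: "argon2id-hash".into(),
    };
    let public = PublicUser::from(&user);

    // Only the safe fields exist on the outgoing type.
    assert_eq!(public.id, 7);
    assert_eq!(public.email, "a@example.com");
    println!("public view contains no credential material");
}
```

Because the compiler enforces which fields exist on the response type, this leans on Rust's type system rather than on remembering to redact fields at each call site.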
Pre-commit Hook for Sensitive Data Detection
Navius includes a pre-commit hook that scans staged files for sensitive data like API keys, secrets, and database credentials to prevent accidental commits of confidential information.
Automatic Setup
The hook is automatically set up when you run `./run_dev.sh` for the first time. If you want to skip this automatic setup, use the `--no-hooks` flag:

```bash
./run_dev.sh --no-hooks
```
Manual Setup
To manually set up the pre-commit hook:
```bash
./scripts/setup-hooks.sh
```
What the Hook Detects
The pre-commit hook scans for:
- API keys and tokens
- AWS access keys
- Private keys (SSH, RSA, etc.)
- Database connection strings
- Passwords and secrets
- Environment variables containing sensitive data
How it Works
When you attempt to commit, the hook:
- Scans all staged files for sensitive patterns
- Blocks commits containing detected sensitive data
- Shows detailed information about what was detected and where
- Provides guidance on how to fix the issues
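The scanning step above can be sketched as a substring match against a pattern list. The real hook's patterns live in `scripts/pre-commit.sh`; the few patterns below are illustrative assumptions, and a production scanner would use regular expressions rather than plain substrings.

```rust
// Return the first sensitive pattern found in a line, if any.
// The pattern list is a small illustrative subset.
fn find_sensitive(line: &str) -> Option<&'static str> {
    const PATTERNS: [&str; 4] = [
        "AKIA",                            // AWS access key ID prefix
        "-----BEGIN RSA PRIVATE KEY-----", // PEM private key header
        "password=",
        "secret=",
    ];
    PATTERNS.iter().find(|p| line.contains(*p)).copied()
}

fn main() {
    // Ordinary code passes through.
    assert_eq!(find_sensitive("let url = \"postgres://host/db\";"), None);

    // Embedded credentials are flagged with the matching pattern.
    assert_eq!(find_sensitive("export DB=user:password=hunter2"), Some("password="));
    assert_eq!(find_sensitive("key: AKIAIOSFODNN7EXAMPLE"), Some("AKIA"));
    println!("scan ok");
}
```

A hook built on this would run the check over every staged line and abort the commit on the first match, printing the file, line, and pattern.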
Bypassing the Hook
In rare cases, you may need to bypass the hook:
```bash
git commit --no-verify
```
> **Warning:** Only bypass the hook when absolutely necessary and ensure no sensitive data is being committed.
Customizing Sensitive Data Patterns
To customize the sensitive data patterns, edit `scripts/pre-commit.sh` and modify the pattern matching rules.
Security Best Practices
API Security
- Always validate input: Use Rust's type system to enforce validation
- Apply the principle of least privilege: Limit access to what's necessary
- Use middleware for cross-cutting concerns: Authentication, rate limiting, etc.
- Log security events: Track authentication attempts, permission changes, etc.
Database Security
- Use parameterized queries: Never concatenate SQL strings
- Limit database permissions: Use a database user with minimal permissions
- Encrypt sensitive data: Hash passwords, encrypt personal information
- Regular backups: Ensure data can be recovered in case of a breach
Configuration Security
- Never commit secrets: Use environment variables or secret management
- Separate configuration from code: Use the layered configuration approach
- Different configs per environment: Maintain separate configuration files
- Environment validation: Validate production environments for security settings
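A common way to keep secrets out of committed configuration is to require them from the environment and fail fast at startup when they are missing. A minimal sketch; the variable name `APP_DATABASE_URL` and the function are hypothetical, not Navius APIs. In a real service the value would come from `std::env::var("APP_DATABASE_URL")` or a secret manager.

```rust
// Fail fast with a clear message when a required secret is absent or blank.
fn require_secret(name: &str, value: Option<&str>) -> Result<String, String> {
    match value {
        Some(v) if !v.trim().is_empty() => Ok(v.to_string()),
        _ => Err(format!("{name} is not set; refusing to start")),
    }
}

fn main() {
    // In production: require_secret("APP_DATABASE_URL",
    //     std::env::var("APP_DATABASE_URL").ok().as_deref())
    assert!(require_secret("APP_DATABASE_URL", Some("postgres://app@db/navius")).is_ok());
    assert_eq!(
        require_secret("APP_DATABASE_URL", None).unwrap_err(),
        "APP_DATABASE_URL is not set; refusing to start"
    );
    println!("secrets are validated at startup");
}
```

Failing at startup (rather than at first use) makes a missing or misconfigured secret an obvious deployment error instead of a latent runtime one.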
Security Testing
Navius includes tooling for security testing:
- Dependency Scanning: Regular checks for vulnerable dependencies
- Static Analysis: Code scanning for security issues
- Penetration Testing: Tools for API security testing
- OWASP Compliance: Checks against OWASP Top 10 vulnerabilities
Rust Security Advantages
Rust's inherent security features provide additional protection:
- Memory Safety: No buffer overflows, use-after-free, or null pointer dereferences
- Type Safety: Strong type system prevents type confusion errors
- Immutability by Default: Reduces the attack surface for data corruption
- No Garbage Collection: Predictable resource usage prevents certain DoS attacks
- Safe Concurrency: Thread safety guaranteed by the compiler
Security Updates
Navius maintains a regular security update schedule:
- Dependency Updates: Regular updates to dependencies
- Security Patches: Immediate patches for critical vulnerabilities
- Security Advisories: Notifications for important security information
Security Incident Response
In case of a security incident:
- Report: [email protected]
- Response Time: We aim to acknowledge reports within 24 hours
- Disclosure: We follow responsible disclosure practices
Compliance
Navius can be used as part of a compliant application architecture for:
- GDPR: Data protection features
- HIPAA: Healthcare data security
- PCI DSS: Payment card information security
- SOC 2: Security, availability, and confidentiality
> **Note:** While Navius provides the building blocks for compliant applications, full compliance depends on how you use the framework and your overall application architecture.
Related Documents
- API Standards - API design guidelines
- Error Handling - Error handling patterns
---
title: Documentation Standards
description: Standards and guidelines for creating and maintaining Navius documentation
category: reference
tags:
  - documentation
  - standards
  - guidelines
  - style-guide
related:
  - ../../98_roadmaps/30_documentation-reorganization-roadmap.md
  - ../../98_roadmaps/30_documentation-reorganization-instructions.md
last_updated: March 27, 2025
version: 1.0
---
Documentation Standards
IMPORTANT: The primary sources of truth for documentation standards and structure are the roadmap and instructions documents listed under Authoritative Documentation below. This document provides a high-level overview and redirects to those authoritative sources.
Overview
This document serves as a reference point for the documentation standards used in the Navius project. Rather than duplicating content, it directs developers and contributors to the authoritative sources.
Quick Reference Checklist
Use this checklist to ensure your documentation meets our standards:
- Metadata
  - YAML frontmatter with required fields (title, description, category, last_updated)
  - Relevant tags added
  - Related documents linked
- Structure
  - Title as single H1 heading
  - All required sections included based on document type
  - Logical heading hierarchy (no skipped levels)
  - "Related Documents" section included
- Formatting
  - Code blocks have language specifiers
  - Lists properly formatted and indented
  - Tables are properly aligned
  - Images have descriptive alt text
- Links
  - Internal links use absolute paths
  - External links use full URLs
  - No broken links
- Accessibility
  - Color isn't the only way to convey information
  - Images have meaningful alt text
  - Tables include proper headers
  - Content is organized for screen readers
- Diagrams
  - Complex concepts visualized with Mermaid diagrams
  - Diagrams have text alternatives
- Validation
  - Documentation passes all validation tools
Authoritative Documentation
Primary Sources
- Documentation Reorganization Roadmap
  - Defines the overall documentation structure
  - Details the validation tools and their integration
  - Establishes success criteria and metrics
  - Outlines the implementation phases
- Documentation Reorganization Instructions
  - Provides detailed implementation guidelines
  - Defines document templates and section requirements
  - Includes comprehensive migration processes
  - Documents the validation tools and their options
Supplementary Resources
- Documentation Testing Tools
  - Documents the tools available for validation
  - Provides usage examples
  - Explains integration with CI/CD pipelines
Key Standards Summary
For quick reference, here are the key standards that all documentation should follow:
- Required Metadata
  - All documents must have a YAML frontmatter with title, description, category, and last_updated fields
  - Related documents and tags are strongly recommended
- Required Sections by Document Type
  - All documents must have document-type-specific sections as defined in the reorganization instructions
  - Getting Started documents: Overview, Prerequisites, Installation, Usage, Troubleshooting, Related Documents
  - Guides: Overview, Prerequisites, Usage, Configuration, Examples, Troubleshooting, Related Documents
  - Reference: Overview, Configuration, Examples, Implementation Details, Related Documents
  - Examples: Overview, Prerequisites, Usage, Related Documents
  - Contributing: Overview, Prerequisites, Related Documents
  - Architecture: Overview, Implementation Details, Related Documents
  - Roadmaps: Overview, Current State, Target State, Implementation Phases, Success Criteria, Related Documents
  - At minimum, all documents should have "Overview" and "Related Documents" sections
- Document Structure
  - Documents must follow a consistent heading structure
  - Code examples must use appropriate syntax highlighting
  - Internal links must use absolute paths from the project root
  - Implementation Details sections should include Mermaid diagrams where appropriate
- Validation
  - All documentation should pass the validation provided by the testing tools
  - Use the add_sections.sh script to ensure consistent document structure
  - Use the fix_frontmatter.sh script to validate metadata
  - Use the fix_links.sh script to validate links
Implementation Tools
The `.devtools/scripts/doc-overhaul/` directory contains tools for implementing and validating these standards:
- generate_report.sh: Comprehensive quality reports with health scores and recommendations
- add_sections.sh: Adding standardized sections based on document type with support for both directory structures
- fix_frontmatter.sh: Validating and fixing document metadata including automatic date handling
- fix_links.sh: Verifying and repairing document links with intelligent path suggestions
- comprehensive_test.sh: In-depth documentation analysis with detailed quality metrics
- improve_docs.sh: Interactive documentation improvement workflow that:
  - Guides users through step-by-step document improvement processes
  - Provides batch operations for fixing common documentation issues
  - Calculates and reports readability metrics with improvement suggestions
  - Automatically updates frontmatter metadata including the last_updated field
  - Generates quality reports with visualization options
  - Supports both old and new directory structures
  - Integrates all other documentation tools in a streamlined workflow
New Validation Tools
As part of the documentation reorganization, we've developed a set of specialized validation tools in the `11newdocs11/98_roadmaps/doc-reorg-tools/` directory:
- code-example-extractor.sh: Extracts Rust code examples from Markdown files for verification
- code-example-verifier.sh: Validates Rust code examples for syntactic correctness and API compatibility
- code-example-fixer.sh: Automatically fixes common issues in code examples
- link-analyzer.sh: Checks internal links for correctness in the new directory structure
- document-validator.sh: Validates document structure, frontmatter, and content quality
- run-consolidated-validation.sh: Integrated validation script that runs all tools and generates a consolidated report
Simplified Validation Tools
To address challenges with the more complex validation tools, we've also created simplified alternatives that are easier to use and more reliable:
- simple-validate.sh: Validates a single document's frontmatter, structure, code examples, and links
- simple-batch-validate.sh: Runs validation on multiple documents and generates a consolidated report
- generate-summary.sh: Creates an executive summary of validation results with actionable recommendations
Automated Fix Tools
To efficiently address common documentation issues identified during validation, we've developed automated fix tools:
- fix-frontmatter.sh: Checks for missing frontmatter and adds a basic template if missing
- add-sections.sh: Checks for missing required sections and adds them based on document type
- code-example-tagger.sh: Identifies untagged code blocks and adds appropriate language tags
These tools provide basic validation and fix capabilities with minimal dependencies and are recommended for initial validation passes. For detailed instructions on using these tools, see the Documentation Validation Tools README.
Tiered Validation Approach
To efficiently validate the large number of documents, we implement a three-tier validation approach:
- Tier 1 (100% Validation)
  - Getting started guides
  - Installation instructions
  - Core API references
  - Frequently accessed examples
- Tier 2 (50% Sample Validation)
  - Secondary examples
  - Feature-specific guides
  - Specialized patterns
  - Contributing guidelines
- Tier 3 (Spot Checking)
  - Supplementary materials
  - Advanced topics
  - Historical roadmaps
  - Specialized configurations
For detailed instructions on using these validation tools, see the Consolidated Validation Script Usage Guide.
Writing Style Guide
Consistent writing style is as important as consistent formatting. Follow these guidelines for clear, accessible content:
Voice and Tone
- Use a clear, direct tone that focuses on helping the reader
- Write in the present tense ("the function returns" not "the function will return")
- Use active voice over passive voice when possible
- Address the reader as "you" rather than "we" or "the user"
- Be conversational but professional (avoid slang but don't be overly formal)
Content Structure
- Start with the most important information first
- Use short paragraphs (3-5 sentences maximum)
- Include a brief overview at the beginning of each document
- Provide concrete examples for complex concepts
- Use numbered lists for sequential steps and bulleted lists for unordered items
Language Clarity
- Define technical terms on first use
- Use consistent terminology throughout documentation
- Avoid jargon or abbreviations without explanation
- Be specific and precise rather than vague
- Keep sentences concise (aim for 15-20 words average)
Before and After Examples
| ❌ Instead of this | ✅ Write this |
|---|---|
| The utilization of the configuration object necessitates initialization prior to implementation in your codebase, which will then subsequently enable the functionality required for operation. | Initialize the configuration object before using it in your code. This enables the core functionality. |
| We've enhanced the previous implementation with a variety of performance improvements that users will find quite advantageous when deploying in their environments. | This version improves performance by: |
Markdown Style Guide
The following style guide provides supplementary formatting guidance for Markdown documents:
Document Structure
Metadata Header
Every document must include a YAML metadata header as specified in the reorganization instructions:
---
title: Document Title
description: Brief description of the document
category: guides | reference | roadmaps | contributing
tags:
- tag1
- tag2
related:
- path/to/related/doc1.md
- path/to/related/doc2.md
last_updated: March 27, 2025
version: 1.0
---
Heading Structure
- Use a single `#` for the document title
- Start with `##` for main sections
- Use increasing heading levels for subsections
- Don't skip heading levels (e.g., don't go from `##` to `####`)
- Keep headings concise and descriptive
Text Formatting
Paragraphs
- Use a single blank line between paragraphs
- Keep paragraphs focused on a single topic
- Aim for 3-5 sentences per paragraph maximum
Emphasis
- Use bold (`**text**`) for emphasis or UI elements
- Use italic (`*text*`) for introduced terms or parameters
- Use code (`` `code` ``) for code snippets, commands, or filenames
- Avoid using ALL CAPS for emphasis
Lists
- Use unordered lists (`-`) for items without specific order
- Use ordered lists (`1.`) for sequential steps or prioritized items
- Maintain consistent indentation for nested lists
- Include a blank line before and after lists
- Item 1
- Item 2
  - Nested item 1
  - Nested item 2
- Item 3

1. First step
2. Second step
   1. Substep 1
   2. Substep 2
3. Third step
Code Elements
Inline Code
Use backticks for inline code:
The `Config` struct contains the application configuration.
Code Blocks
Use triple backticks with a language specifier:
```rust
fn main() {
println!("Hello, world!");
}
```
Command Line Examples
For command line examples, use `bash` or `shell` as the language:
```bash
cargo run --release
```
Links and References
Internal Links
Use absolute paths from the project root for internal links:
See the [Installation Guide](../01_getting_started/installation.md) for more information.
External Links
Use complete URLs for external links:
Visit the [Rust website](https://www.rust-lang.org/) for more information.
Images
Include images with alt text:

Tables
Use tables for structured data:
| Column 1 | Column 2 | Column 3 |
|----------|----------|----------|
| Cell 1 | Cell 2 | Cell 3 |
| Cell 4 | Cell 5 | Cell 6 |
Notes and Callouts
Use blockquotes with prefixes for notes and warnings:
> **Note:** This is important information.
> **Warning:** This is a critical warning.
Diagrams and Visualizations
For complex concepts, use diagrams to enhance understanding. We standardize on Mermaid for creating diagrams within Markdown.
Using Mermaid Diagrams
Mermaid diagrams are preferred because they:
- Can be version-controlled as code
- Render correctly in GitHub/GitLab
- Are accessible with proper text alternatives
- Can be easily updated without graphic design tools
Include Mermaid diagrams using code blocks with the mermaid
language specifier:
```mermaid
graph TD
A[Start] --> B{Decision}
B -->|Yes| C[Action 1]
B -->|No| D[Action 2]
C --> E[End]
D --> E
```
Common Diagram Types
Flowcharts (Process Diagrams)
Use for depicting processes, workflows, or decision trees:
```mermaid
flowchart LR
A[Input] --> B(Process)
B --> C{Decision}
C -->|Yes| D[Output 1]
C -->|No| E[Output 2]
```
Sequence Diagrams
Use for depicting interactions between components:
```mermaid
sequenceDiagram
participant Client
participant API
participant Database
Client->>API: Request data
API->>Database: Query
Database-->>API: Return results
API-->>Client: Send response
```
Class Diagrams
Use for depicting relationships between classes/structs:
```mermaid
classDiagram
class User {
+String username
+String email
+login()
+logout()
}
class Admin {
+manageUsers()
}
User <|-- Admin
```
Entity-Relationship Diagrams
Use for database schema representations:
```mermaid
erDiagram
CUSTOMER ||--o{ ORDER : places
ORDER ||--|{ LINE-ITEM : contains
CUSTOMER }|..|{ DELIVERY-ADDRESS : uses
```
Diagram Best Practices
- Always provide a text description before or after complex diagrams
- Keep diagrams focused on one concept
- Use consistent styling across diagrams
- Label edges and nodes clearly
- Include a legend for complex notation
- Test your diagrams in dark and light mode
Accessibility Guidelines
Creating accessible documentation ensures that all users, including those with disabilities, can effectively use our documentation.
Text Alternatives for Non-Text Content
- Always provide alt text for images that describes the content and purpose
- For complex images or diagrams, provide a text description that explains what the image conveys
- For decorative images that don't convey meaning, use empty alt text (`alt=""`)
Example:
<!-- ❌ Poor alt text -->

<!-- ✅ Good alt text -->

Headings and Structure
- Use headings to organize content in a logical hierarchy
- Don't skip heading levels (e.g., don't go from `##` to `####`)
- Make sure headings accurately describe the content that follows
Links
- Use descriptive link text that indicates where the link leads
- Avoid using "click here" or "read more" as link text
- Ensure all links are distinguishable (not just by color)
Example:
<!-- ❌ Poor link text -->
For more information about accessibility, [click here](../04_guides/accessibility.md).
<!-- ✅ Good link text -->
For more information, read the [Accessibility Guidelines](../04_guides/accessibility.md).
Tables
- Use tables for tabular data, not for layout
- Include table headers using `|---|` syntax
- Keep tables simple and avoid merging cells
- Provide a caption or description before complex tables
Color and Contrast
- Never use color as the only way to convey information
- Ensure sufficient contrast between text and background
- Test documentation in both light and dark modes
Mobile-Friendly Content
With users increasingly accessing documentation on mobile devices, ensure content works well on small screens:
- Use responsive tables or consider alternatives for wide tables
- Keep code examples concise and use line breaks to prevent horizontal scrolling
- Optimize images for mobile viewing (consider progressive loading)
- Test documentation on mobile devices or emulators
- Use shorter paragraphs for better mobile readability
- Prefer vertical layouts over horizontal when possible
Internationalization Considerations
While our primary documentation is in English, following these practices helps with future translation and international accessibility:
- Use simple, clear language that's easier to translate
- Avoid idioms, colloquialisms, and culture-specific references
- Use ISO standard date formats (YYYY-MM-DD)
- Keep sentences relatively short to aid translation
- Use visuals to complement text where appropriate
- Provide context for potentially ambiguous terms
- Structure content with clear headers for easier navigation
- Use Unicode characters rather than ASCII art
Document Type Templates
The following templates provide examples for different document types. For the definitive list of required sections by document type, refer to the Documentation Reorganization Instructions.
Index Documents (README.md)
# Guide Documentation
This directory contains guides for using and developing with the Navius framework.
## Document List
- [Development Workflow](development-workflow.md) - Guide to the development process
- [Testing Guide](testing.md) - How to write and run tests
- [Authentication](authentication.md) - Setting up and using authentication
## Key Documents
If you're new to development, start with:
- [Development Workflow](development-workflow.md)
- [Project Structure](../05_reference/project-structure.md)
## Getting Started
For new developers, we recommend following these guides in order:
1. [Installation Guide](../01_getting_started/installation.md)
2. [Development Setup](../01_getting_started/development-setup.md)
3. [Development Workflow](development-workflow.md)
Guide Documents
# Authentication Guide
## Overview
This guide explains how to implement authentication in your Navius application.
## Prerequisites
- Basic understanding of Rust and Axum
- Navius development environment set up
- Access to Microsoft Entra (formerly Azure AD)
## Step-by-step Instructions
1. **Configure Environment Variables**
```shell
export ENTRA_CLIENT_ID=your-client-id
export ENTRA_TENANT_ID=your-tenant-id
```
2. **Add Authentication Middleware**
title: ""
description: "Reference documentation for Navius"
category: "Reference"
tags: ["documentation", "reference"]
last_updated: "April 3, 2025"
version: "1.0"
Configuration Standards
This document outlines the configuration standards and patterns used throughout the Navius framework, providing a reference for consistent configuration implementation.
Configuration File Structure
File Locations
Navius applications use a standardized configuration file structure:
config/
├── default.yaml       # Base configuration (required)
├── development.yaml   # Development environment overrides
├── test.yaml          # Testing environment overrides
└── production.yaml    # Production environment overrides
File Naming Convention
- `default.yaml`: Base configuration that applies to all environments
- `{environment}.yaml`: Environment-specific overrides (development, test, production)
- Custom environments can be defined with `{custom-name}.yaml`
YAML Format Standards
Nesting and Organization
Configuration should be organized in a hierarchical structure:
# Top-level application configuration
app:
name: "Navius Application"
version: "1.0.0"
description: "A Navius framework application"
# Server configuration
server:
host: "127.0.0.1"
port: 3000
timeout_seconds: 30
# Feature flags
features:
advanced_metrics: true
experimental_api: false
# Subsystem configurations
database:
url: "postgres://localhost:5432/navius"
max_connections: 10
timeout_seconds: 5
logging:
level: "info"
format: "json"
file: "/var/log/navius.log"
Naming Conventions
- Use snake_case for all configuration keys
- Group related settings under common prefixes
- Use descriptive, clear names
- Avoid abbreviations unless widely understood
Value Types
- Strings: Use quotes (`"value"`)
- Numbers: No quotes (`42`, `3.14`)
- Booleans: Use `true` or `false` (lowercase)
- Arrays: Use `[item1, item2]` or multiline list format
- Maps: Use nested format with indentation
Environment Variables
Environment Variable Mapping
Configuration values can be overridden via environment variables using this pattern:
APP__NAME="Overridden App Name"
SERVER__PORT=8080
FEATURES__ADVANCED_METRICS=false
Rules:
- Double underscore (`__`) separates configuration keys
- Keys are case-insensitive
- Environment variables take precedence over file configuration
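The mapping rule above can be sketched as a small helper. Note that `env_var_to_key` is a hypothetical name for illustration, not part of the Navius API:

```rust
/// Map an environment variable name like "SERVER__PORT" to the nested
/// configuration key "server.port": double underscores become dots and
/// keys are lowercased, matching the case-insensitivity rule above.
fn env_var_to_key(var: &str) -> String {
    var.to_lowercase().replace("__", ".")
}

fn main() {
    assert_eq!(env_var_to_key("APP__NAME"), "app.name");
    assert_eq!(env_var_to_key("SERVER__PORT"), "server.port");
    assert_eq!(env_var_to_key("FEATURES__ADVANCED_METRICS"), "features.advanced_metrics");
}
```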
Variable Types
- Strings: Use as-is
- Numbers: Parsed from string representation
- Booleans: `true`, `false`, `1`, `0`, `yes`, `no`, `y`, `n` (case-insensitive)
- Arrays: Comma-separated values (`val1,val2,val3`)
- Objects: JSON format (`{"key": "value"}`)
Secrets Management
Sensitive Data
Never store secrets in configuration files. Use these approaches instead:
1. **Environment Variables**: For most secrets

   ```
   DATABASE__PASSWORD="secure-password"
   JWT__SECRET_KEY="jwt-signing-key"
   ```

2. **External Secret Managers**: For advanced scenarios

   ```yaml
   secrets:
     provider: "vault"  # or "aws-secrets", "azure-keyvault"
     url: "https://vault.example.com"
     path: "secret/navius"
   ```

3. **File References**: For certificate files

   ```yaml
   tls:
     cert_file: "/path/to/cert.pem"
     key_file: "/path/to/key.pem"
   ```
Secret Configuration Patterns
Use this format for secret references:
database:
username: "db_user"
password: "${DB_PASSWORD}" # Resolved from environment
jwt:
secret: "${JWT_SECRET}" # Resolved from environment
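The `${VAR}` resolution step can be sketched as follows. This is a minimal, dependency-free illustration; a real loader would read `std::env` directly and report missing variables as errors rather than substituting an empty string:

```rust
use std::collections::HashMap;

/// Resolve a "${VAR}" placeholder in a configuration value against a
/// map of environment variables; non-placeholder values pass through.
fn resolve_placeholder(value: &str, env: &HashMap<&str, &str>) -> String {
    match value.strip_prefix("${").and_then(|v| v.strip_suffix('}')) {
        Some(name) => env.get(name).copied().unwrap_or("").to_string(),
        None => value.to_string(),
    }
}

fn main() {
    let env = HashMap::from([("DB_PASSWORD", "s3cret")]);
    assert_eq!(resolve_placeholder("${DB_PASSWORD}", &env), "s3cret");
    assert_eq!(resolve_placeholder("db_user", &env), "db_user");
}
```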
Configuration Loading
Load Order
Configuration is loaded in this order, with later steps overriding earlier ones:
1. Default configuration file (`default.yaml`)
2. Environment-specific file (based on the `ENVIRONMENT` variable)
3. Environment variables
4. Command-line arguments
Command-Line Arguments
Command-line arguments follow this format:
./my-app --server.port=8080 --logging.level=debug
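The layered load order reduces to "later layers win" when merging key/value pairs. The sketch below uses plain in-memory maps purely for illustration; a real loader would parse the YAML files, the process environment, and `argv`:

```rust
use std::collections::HashMap;

/// Merge configuration layers in load order: entries from later layers
/// overwrite entries from earlier ones.
fn merge(layers: &[HashMap<&str, &str>]) -> HashMap<String, String> {
    let mut out = HashMap::new();
    for layer in layers {
        for (k, v) in layer {
            out.insert(k.to_string(), v.to_string());
        }
    }
    out
}

fn main() {
    let defaults = HashMap::from([("server.port", "3000"), ("logging.level", "info")]);
    let development = HashMap::from([("server.port", "8080")]);
    let cli = HashMap::from([("logging.level", "debug")]);

    // Load order: defaults, then environment file, then CLI arguments.
    let merged = merge(&[defaults, development, cli]);
    assert_eq!(merged["server.port"], "8080");    // from development.yaml
    assert_eq!(merged["logging.level"], "debug"); // from the command line
}
```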
Validation
Schema Validation
All configuration should be validated against a schema:
```rust
#[derive(Debug, Deserialize, Validate)]
pub struct ServerConfig {
    #[validate(required)]
    pub host: String,

    #[validate(range(min = 1, max = 65535))]
    pub port: u16,

    #[validate(range(min = 1, max = 300))]
    pub timeout_seconds: u32,
}
```
Required vs Optional Settings
Always provide clear documentation about which settings are required vs optional:
# Required settings (no defaults provided by application)
database:
url: "postgres://localhost:5432/navius" # REQUIRED
# Optional settings (defaults provided by application)
server:
port: 3000 # Optional, defaults to 3000 if not specified
host: "127.0.0.1" # Optional, defaults to 127.0.0.1 if not specified
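The required/optional distinction implies a fail-fast check at startup. A dependency-free sketch (the helper names `require` and `optional` are hypothetical, not Navius APIs):

```rust
use std::collections::HashMap;

/// Fail fast when a required setting is missing.
fn require<'a>(cfg: &'a HashMap<&str, &str>, key: &str) -> Result<&'a str, String> {
    cfg.get(key)
        .copied()
        .ok_or_else(|| format!("missing required setting: {key}"))
}

/// Fall back to an application-provided default for optional settings.
fn optional<'a>(cfg: &'a HashMap<&str, &str>, key: &str, default: &'a str) -> &'a str {
    cfg.get(key).copied().unwrap_or(default)
}

fn main() {
    let cfg = HashMap::from([("database.url", "postgres://localhost:5432/navius")]);
    assert!(require(&cfg, "database.url").is_ok());
    assert!(require(&cfg, "jwt.secret").is_err()); // required but absent
    assert_eq!(optional(&cfg, "server.port", "3000"), "3000");
}
```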
Feature Flags
Feature Configuration
Organize feature flags under a dedicated section:
features:
advanced_metrics: true
experimental_api: false
beta_endpoints: false
cache_enabled: true
Feature-Specific Configuration
Group feature-specific settings under the feature name:
features:
advanced_metrics:
enabled: true
sampling_rate: 0.1
export_interval_seconds: 60
cache:
enabled: true
ttl_seconds: 300
max_entries: 10000
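A feature section like the cache block above maps naturally onto a typed struct with defaults. In a real application the struct would derive serde's `Deserialize` with `#[serde(default)]`; the derive is omitted here to keep the sketch dependency-free:

```rust
/// Typed view of the feature-specific cache settings shown above.
#[derive(Debug, Clone, PartialEq)]
struct CacheFeature {
    enabled: bool,
    ttl_seconds: u64,
    max_entries: usize,
}

impl Default for CacheFeature {
    fn default() -> Self {
        // Defaults mirror the YAML example values.
        Self { enabled: true, ttl_seconds: 300, max_entries: 10_000 }
    }
}

fn main() {
    let cache = CacheFeature::default();
    assert!(cache.enabled);
    assert_eq!(cache.ttl_seconds, 300);
    assert_eq!(cache.max_entries, 10_000);
}
```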
Documentation
Configuration Comments
Include comments in YAML files to document configuration options:
server:
# The host IP address to bind the server to
# Use "0.0.0.0" to bind to all interfaces
host: "127.0.0.1"
# The port number to listen on (1-65535)
port: 3000
# Request timeout in seconds
timeout_seconds: 30
Configuration Reference
Maintain a comprehensive configuration reference:
```rust
/// Server configuration options
///
/// # Examples
///
/// ```yaml
/// server:
///   host: "127.0.0.1"
///   port: 3000
///   timeout_seconds: 30
/// ```
#[derive(Debug, Deserialize)]
pub struct ServerConfig {
    /// The host address to bind to
    pub host: String,
    /// The port to listen on (1-65535)
    pub port: u16,
    /// Request timeout in seconds
    pub timeout_seconds: u32,
}
```
Default Values
Sensible Defaults
Provide sensible defaults for all optional configuration:
```rust
impl Default for ServerConfig {
    fn default() -> Self {
        Self {
            host: "127.0.0.1".to_string(),
            port: 3000,
            timeout_seconds: 30,
        }
    }
}
```
Overriding Defaults
Document how defaults can be overridden:
# Override defaults in environment-specific files
# development.yaml
server:
port: 8080 # Override default port
Configuration Update
Dynamic Configuration
For settings that can be updated at runtime:
# Settings that support runtime updates
dynamic:
logging:
level: "info" # Can be changed at runtime
cache:
ttl_seconds: 300 # Can be changed at runtime
Reload Mechanism
Support configuration reload where appropriate:
```rust
// Reload configuration from disk
config_service.reload().await?;

// Subscribe to configuration changes
config_service.subscribe(|updated_config| {
    // React to changes
});
```
Integration with Services
Dependency Injection
Inject configuration into services:
```rust
// Service that uses configuration
pub struct UserService {
    config: Arc<ConfigService>,
    repository: Arc<UserRepository>,
}

impl UserService {
    pub fn new(
        config: Arc<ConfigService>,
        repository: Arc<UserRepository>,
    ) -> Self {
        Self { config, repository }
    }

    pub async fn get_user(&self, id: &str) -> Result<User, Error> {
        let timeout = self.config.get::<u64>("user_service.timeout_seconds")
            .unwrap_or(5);
        self.repository.get_user_with_timeout(id, timeout).await
    }
}
```
Best Practices
- Centralized Configuration: Keep configuration in one central location
- Environment Separation: Use separate files for each environment
- Validation: Always validate configuration at startup
- Documentation: Document all configuration options
- Defaults: Provide sensible defaults for all optional settings
- Type Safety: Use strongly-typed configuration objects
- Secrets Management: Never store secrets in configuration files
- Configuration Testing: Test configuration loading and validation
Example Configuration
Complete Example
# Application configuration
app:
name: "Navius Example App"
version: "1.0.0"
description: "An example Navius application"
# Server configuration
server:
host: "127.0.0.1"
port: 3000
timeout_seconds: 30
# Feature flags
features:
advanced_metrics: true
experimental_api: false
# Database configuration
database:
driver: "postgres"
host: "localhost"
port: 5432
name: "navius"
username: "navius_user"
password: "${DB_PASSWORD}" # Set via environment variable
pool:
max_connections: 10
timeout_seconds: 5
idle_timeout_seconds: 300
# Logging configuration
logging:
level: "info"
format: "json"
output:
console: true
file: "/var/log/navius.log"
# Cache configuration
cache:
enabled: true
provider: "redis"
url: "redis://localhost:6379"
ttl_seconds: 300
# Metrics configuration
metrics:
enabled: true
exporter: "prometheus"
endpoint: "/metrics"
interval_seconds: 15
# Health check configuration
health:
enabled: true
endpoint: "/health"
include_details: true
# API configuration
api:
base_path: "/api/v1"
rate_limit:
enabled: true
requests_per_minute: 60
cors:
enabled: true
allowed_origins: ["https://example.com"]
title: "Navius Framework Roadmaps"
description: "Documentation about Navius Framework Roadmaps"
category: roadmap
tags:
- api
- architecture
- authentication
- aws
- caching
- database
- development
- documentation
- integration
- performance
- security
- testing
last_updated: March 27, 2025
version: 1.0
Navius Framework Roadmaps
This directory contains roadmaps for enhancing the Navius framework to match the feature set and developer experience of established enterprise frameworks like Spring Boot.
Quick Links
- Template for Updates - Guidelines for updating roadmaps
- Server Customization System - Foundation priority (96% complete)
- Generic Service Implementations - Top priority (71% complete)
- Testing Framework - Current focus area (35% complete)
- Enhanced Caching - Active development (40% complete)
- Dependency Injection - High priority focus area
- Project Status Dashboard - Overall project status
- Roadmap Instructions - Implementation guides for roadmaps
Active Roadmaps
ID | Roadmap | Status | Priority | Dependencies |
---|---|---|---|---|
17 | Server Customization System | 96% | Critical | None |
16 | Generic Service Implementations | 71% | Critical | None |
03 | Testing Framework | 35% | High | None |
07 | Enhanced Caching | 40% | High | 02, 04 |
01 | Dependency Injection | 0% | High | None |
10 | Developer Experience | 10% | High | None |
05 | Data Validation | 0% | High | None |
12 | Documentation Overhaul | 0% | Medium | None |
11 | Security Features | 0% | High | None |
02 | Database Integration | 0% | High | 01 |
04 | AWS Integration | 0% | Medium | 01 |
06 | Resilience Patterns | 0% | Medium | 04 |
08 | API Versioning | 0% | Low | 05 |
09 | Declarative Features | 0% | Low | 01 |
15 | API Model Management | 0% | High | None |
Completed Roadmaps
ID | Roadmap | Completion Date | Location |
---|---|---|---|
11 | Project Structure Improvements | March 24, 2025 | completed/ |
12 | Project Restructuring | March 24, 2025 | completed/ |
13 | App Directory Completion | March 24, 2025 | completed/ |
14 | Module Relocation Summary | March 24, 2025 | completed/ |
15 | Project Restructuring Summary | March 24, 2025 | completed/ |
Current Implementation Status
Overall Progress
[█████░░░░░] 52% Complete
Component | Progress | Status | Next Milestone |
---|---|---|---|
Server Customization | 96% | 🔄 In Progress | CLI Visualization |
Generic Services | 71% | 🔄 In Progress | Observability Service Generalization |
Project Structure | 100% | ✅ Complete | N/A |
Testing Framework | 35% | 🔄 In Progress | API Resource Testing |
Enhanced Caching | 40% | 🔄 In Progress | Cache Monitoring and Metrics |
Dependency Injection | 0% | 🔜 Starting | AppState Builder |
Developer Experience | 10% | 🔄 In Progress | Local Dev Environment |
Data Validation | 0% | 🔜 Starting | Validation Framework |
Security Features | 0% | ⏳ Not Started | Auth Implementation |
Documentation | 0% | ⏳ Not Started | Documentation Audit |
Testing Coverage
Module | Coverage | Change | Status |
---|---|---|---|
Core Modules | 98% | +0% | ✅ |
API Resource | 40% | +40% | 🔄 |
User Management | 35% | +0% | 🔄 |
Authentication | 45% | +0% | 🔄 |
Overall | 35% | +29% | 🔄 |
Implementation Strategy
Current Sprint Focus (March-April 2025)
1. **Testing Framework Enhancement**
   - Complete API Resource Testing (40% → 80%)
   - Implement Core Reliability Component tests
   - Add database operation integration tests
   - Target: Maintain 98% core coverage

2. **Dependency Injection**
   - Implement AppState builder
   - Define service traits
   - Add error handling
   - Target: Reach 30% completion

3. **Developer Experience**
   - Complete Docker Compose setup
   - Implement hot reload
   - Add development testing tools
   - Target: Reach 40% completion

4. **Data Validation**
   - Define validation framework
   - Implement input validation decorators
   - Add schema-based validation
   - Target: Reach 25% completion

5. **Enhanced Caching**
   - Improve monitoring and metrics
   - Implement cache consistency mechanisms
   - Optimize cache performance
   - Target: Reach 60% completion

6. **Documentation Overhaul**
   - Complete documentation audit
   - Define document standards
   - Start reorganizing documentation structure
   - Target: Reach 25% completion

7. **Security Features**
   - Begin auth implementation
   - Define security boundaries
   - Implement core security utilities
   - Target: Reach 20% completion
Roadmap Dependencies
```mermaid
graph TD
    DI[01: Dependency Injection] --> DB[02: Database Integration]
    DI --> AWS[04: AWS Integration]
    DB --> Cache[07: Enhanced Caching]
    AWS --> Cache
    DI --> Decl[09: Declarative Features]
    Val[05: Data Validation] --> API[08: API Versioning]
    AWS --> Res[06: Resilience Patterns]
    Doc[12: Documentation Overhaul] -.-> All[All Roadmaps]
```
Quality Gates
Every roadmap implementation must pass these gates:
1. Testing Requirements
- 80%+ unit test coverage
- Integration tests for external services
- Performance tests for critical paths
- Security test coverage
2. Documentation Requirements
- API documentation
- Example code
- Architecture decisions
- Security considerations
3. Security Requirements
- Security scan passed
- Auth/authz implemented
- Secure configuration
- Error handling reviewed
4. Performance Requirements
- Load testing complete
- Resource usage analyzed
- Scalability verified
- Monitoring implemented
Progress Tracking
Each roadmap follows our standardized tracking system:
1. **Task Status Markers**
   - `[x]` Completed
   - `[~]` In Progress
   - `[ ]` Not Started
   - `[-]` Abandoned

2. **Progress Updates**
   - Include current system date
   - Specific implementation details
   - Clear status messages
   - No future dates

3. **Coverage Tracking**
   - Use `navius-coverage.json`
   - Generate HTML reports
   - Track weekly baselines
   - Monitor critical paths
Contributing
- Follow the template for all updates
- Use the current system date (`date "+%B %d, %Y"`)
- Include specific implementation details
- Update overall progress metrics
- Maintain documentation quality
Roadmap Implementation Instructions
Detailed implementation guides for roadmaps are available in the roadmap-instructions directory. These provide step-by-step guidance, specific prompts, and verification steps for completing roadmap tasks.
Currently available implementation guides:
References
Documentation Roadmaps
- 30_documentation-reorganization-roadmap.md - Strategic plan for restructuring Navius documentation
- 30_documentation-reorganization-instructions.md - Implementation instructions for documentation restructuring
title: "Generic Service Implementations Roadmap"
description: "Transforming hardcoded core service implementations into generic interfaces with pluggable providers"
category: roadmap
tags:
- architecture
- refactoring
- dependency-injection
- services
- generic-programming
last_updated: March 27, 2025
version: 1.0
Generic Service Implementations Roadmap
Overview
This roadmap outlines the steps to transform hardcoded service implementations in the core module into generic interfaces with pluggable providers. Following the successful pattern of refactoring our auth system from Entra-specific to a generic OAuth implementation, we'll apply the same approach to other core services that are currently hardcoded.
Current Progress
- Phase 1 (Database Service Generalization): 100% Complete
- Phase 2 (Health Service Generalization): 100% Complete
- Phase 3 (Cache Service Generalization): 100% Complete
- Phase 4 (Collection Model Generalization): 100% Complete
- Phase 5 (Logging Service Generalization): 100% Complete
- Overall Progress: 71% (5/7 phases completed)
Current Status
We've identified several hardcoded implementations in the core that should be made generic:
- Database Service: Currently hardcoded to use InMemoryDatabase ✅
- Health Service: Hardcoded health indicators ✅
- Cache Implementation: Specifically tied to Moka cache ✅
- Database Collection Model: Specific methods for user collection ✅
- Logging Service: Direct use of tracing crate ✅
- Database Provider: Only supports in-memory database
Target State
Services in the core module should:
- Be defined through generic traits/interfaces
- Support multiple implementations through providers
- Use dependency injection for wiring
- Allow configuration-based selection of providers
- Support testing through mock implementations
Implementation Progress Tracking
Phase 1: Database Service Generalization
- **Define Database Interface**
  - Create `DatabaseInterface` trait to abstract database operations
  - Define key operations (get, set, delete, query)
  - Add generic type parameters for flexible implementation
  - Create provider trait for database instantiation

  Updated at: March 26, 2025 - Completed implementation of DatabaseOperations and DatabaseProvider traits
- **Refactor In-Memory Database**
  - Make InMemoryDatabase implement the new interface
  - Update database service to use the interface
  - Create separate implementation module
  - Add tests for the implementation

  Updated at: March 26, 2025 - Implemented InMemoryDatabase that uses the new interface
- **Implement Configuration System**
  - Update DatabaseConfig to support provider selection
  - Implement provider registry
  - Create factory method for database instantiation
  - Add configuration validation

  Updated at: March 26, 2025 - Created DatabaseProviderRegistry with provider selection and validation
Phase 2: Health Service Generalization
- **Define Health Check Interface**
  - Create `HealthIndicator` trait (already exists but needs enhancement)
  - Add provider system for health indicators
  - Create registration mechanism for custom indicators
  - Implement discovery mechanism for auto-registration

  Updated at: March 26, 2025 - Completed all Health Indicator Interface tasks, including dynamic discovery support with the HealthDiscoveryService
- **Refactor Health Indicators**
  - Move existing indicators to separate modules
  - Make all indicators pluggable
  - Implement conditional indicators based on config
  - Add dynamic health indicator support

  Updated at: March 26, 2025 - Completed all Health Indicator refactoring, including dynamic indicator registration and discovery
- **Implement Health Dashboard**
  - Centralize health data collection
  - Add metadata support for indicators
  - Implement status aggregation
  - Create detailed reporting system

  Updated at: March 26, 2025 - Implemented complete Health Dashboard with detailed reporting, history tracking, and dynamic indicator support
Phase 3: Cache Service Generalization
- **Define Cache Interface**
  - Create `CacheProvider` trait
  - Abstract cache operations from implementation
  - Support different serialization strategies
  - Define eviction policy interface

  Updated at: March 26, 2025 - Created comprehensive cache provider interface with support for various eviction policies and serialization strategies
- **Refactor Moka Cache Implementation**
  - Make the existing implementation a provider
  - Create separate module for Moka implementation
  - Remove direct Moka dependencies from core
  - Implement adapter pattern for Moka

  Updated at: March 26, 2025 - Replaced direct Moka dependency with a custom implementation that follows the new generic interface
- **Add Alternative Cache Implementation**
  - Implement simple in-memory cache
  - Create Redis cache provider (placeholder)
  - Add configuration for selecting providers
  - Implement cache provider factory

  Updated at: March 26, 2025 - Implemented in-memory cache provider and Redis placeholder with provider factory for cache instantiation
- **Implement Two-Tier Cache Fallback**
  - Create `TwoTierCache` implementation
  - Support fast cache (memory) with slow cache (Redis) fallback
  - Add automatic promotion of items from slow to fast cache
  - Support configurable TTLs for each cache level

  Updated at: March 26, 2025 - Implemented TwoTierCache with fast/slow cache layers, automatic promotion, and configurable TTLs
Phase 4: Collection Model Generalization
- **Define Entity Interface**
  - Create generic entity trait
  - Define common CRUD operations
  - Implement repository pattern
  - Support type-safe collections

  Updated at: March 26, 2025 - Completed implementation of Entity and Repository traits, with generic ID type support
- **Refactor User Collection**
  - Abstract user-specific methods to generic pattern
  - Create repository implementations
  - Implement type mapping between layers
  - Add comprehensive tests

  Updated at: March 26, 2025 - Implemented InMemoryRepository and UserService that follows the repository pattern
- **Create Repository Pattern Documentation**
  - Document repository pattern implementation
  - Add examples for custom repositories
  - Create migration guide for existing code
  - Update architecture documentation

  Updated at: March 26, 2025 - Created comprehensive documentation for the repository pattern in docs/examples/repository-pattern-example.md
Phase 5: Logging Service Generalization
- **Define Logging Interface**
  - Create `LoggingProvider` trait
  - Abstract core logging operations
  - Support structured logging
  - Define log filtering and sampling interfaces

  Updated at: March 26, 2025 - Implemented LoggingProvider trait with comprehensive interface for all logging operations
- **Refactor Existing Logging Implementation**
  - Make current logging system a provider
  - Create separate module for implementation
  - Remove direct logging dependencies from core
  - Add adapter pattern for current logger

  Updated at: March 26, 2025 - Created TracingLoggerProvider to adapt the existing tracing-based logging to the new interface
- **Add Enterprise Logging Providers**
  - Create console logging provider
  - Add support for structured logging format
  - Implement provider registry for swappable implementations
  - Support global context and child loggers

  Updated at: March 26, 2025 - Implemented ConsoleLoggerProvider with colored output and created LoggingProviderRegistry for dynamic selection
Phase 6: Observability Service Generalization
- **Define Observability Interface**
  - Create `ObservabilityProvider` trait
  - Define metrics, tracing, and profiling operations
  - Support context propagation
  - Create sampling and filtering mechanisms

  Updated at: Not started
- **Refactor Metrics Implementation**
  - Adapt current metrics to provider interface
  - Create separate metrics module
  - Implement adapter for current metrics
  - Add telemetry correlation support

  Updated at: Not started
- **Add Enterprise Observability Providers**
  - Create Dynatrace integration
  - Add OpenTelemetry support
  - Implement Prometheus metrics provider
  - Support distributed tracing with Jaeger

  Updated at: Not started
Phase 7: Configuration Service Generalization
- **Define Configuration Interface**
  - Create `ConfigProvider` trait
  - Abstract configuration loading and refreshing
  - Support environment-specific configuration
  - Add configuration change notifications

  Updated at: Not started
- **Refactor Static Configuration**
  - Make current file-based config a provider
  - Create separate config modules by domain
  - Support hot reloading of configuration
  - Add validation rules framework

  Updated at: Not started
- **Add Dynamic Configuration Providers**
  - Implement environment variable provider
  - Create AWS Parameter Store/Secrets Manager provider
  - Add etcd/Consul KV integration
  - Support feature flags and A/B testing config

  Updated at: Not started
Implementation Status
- Overall Progress: 71% complete (Phases 1-5 fully completed)
- Last Updated: March 26, 2025
- Next Milestone: Begin Observability Service Generalization (Phase 6)
- Current Focus: Completed implementation of logging service generalization with provider registry and multiple implementations
Success Criteria
- No hardcoded service implementations in the core module
- All services defined by interfaces with at least two implementations
- 100% test coverage of interfaces and 90%+ for implementations
- Comprehensive documentation for extending services
- Migration guide for updating client code
- Performance metrics showing no regression from the refactoring
Detailed Implementation Guide
Step 1: Database Interface Implementation
Start by creating a clear abstraction for database operations:
```rust
/// Trait defining database operations
pub trait DatabaseOperations: Send + Sync {
    /// Get a value from the database
    async fn get(&self, collection: &str, key: &str) -> Result<Option<String>, ServiceError>;

    /// Set a value in the database
    async fn set(&self, collection: &str, key: &str, value: &str) -> Result<(), ServiceError>;

    /// Delete a value from the database
    async fn delete(&self, collection: &str, key: &str) -> Result<bool, ServiceError>;

    /// Query the database with a filter
    async fn query(&self, collection: &str, filter: &str) -> Result<Vec<String>, ServiceError>;
}

/// Trait for database providers
#[async_trait]
pub trait DatabaseProvider: Send + Sync {
    /// The type of database this provider creates
    type Database: DatabaseOperations;

    /// Create a new database instance
    async fn create_database(&self, config: DatabaseConfig) -> Result<Self::Database, ServiceError>;

    /// Check if this provider supports the given configuration
    fn supports(&self, config: &DatabaseConfig) -> bool;
}
```
Step 2: Health Indicator Implementation
Enhance the existing health indicator system:
```rust
/// Extended HealthIndicator trait
pub trait HealthIndicator: Send + Sync {
    /// Get the name of this health indicator
    fn name(&self) -> String;

    /// Check the health of this component
    fn check_health(&self, state: &Arc<AppState>) -> DependencyStatus;

    /// Get metadata about this indicator
    fn metadata(&self) -> HashMap<String, String> {
        HashMap::new()
    }

    /// Get the order in which this indicator should be checked
    fn order(&self) -> i32 {
        0
    }

    /// Whether this indicator is critical (failure means system is down)
    fn is_critical(&self) -> bool {
        false
    }
}

/// Health indicator provider trait
pub trait HealthIndicatorProvider: Send + Sync {
    /// Create health indicators for the application
    fn create_indicators(&self) -> Vec<Box<dyn HealthIndicator>>;

    /// Whether this provider is enabled
    fn is_enabled(&self, config: &AppConfig) -> bool;
}
```
Step 3: Cache Implementation
Abstract the cache implementation:
```rust
/// Cache operations trait
#[async_trait]
pub trait CacheOperations<T>: Send + Sync {
    /// Get a value from the cache
    async fn get(&self, key: &str) -> Option<T>;

    /// Set a value in the cache
    async fn set(&self, key: &str, value: T, ttl: Option<Duration>) -> Result<(), CacheError>;

    /// Delete a value from the cache
    async fn delete(&self, key: &str) -> Result<bool, CacheError>;

    /// Clear the cache
    async fn clear(&self) -> Result<(), CacheError>;

    /// Get cache statistics
    fn stats(&self) -> CacheStats;
}

/// Cache provider trait
#[async_trait]
pub trait CacheProvider: Send + Sync {
    /// Create a new cache instance
    async fn create_cache<T: Send + Sync + 'static>(
        &self,
        config: CacheConfig,
    ) -> Result<Box<dyn CacheOperations<T>>, CacheError>;

    /// Check if this provider supports the given configuration
    fn supports(&self, config: &CacheConfig) -> bool;
}
```
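As an illustration of the TTL semantics in `set`, here is a minimal synchronous in-memory sketch (the `MemoryCache` type is hypothetical and drops the async machinery and `CacheStats` for brevity):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Simplified synchronous sketch of the cache contract above:
/// each entry carries an optional expiry deadline checked on read.
pub struct MemoryCache<T> {
    entries: HashMap<String, (T, Option<Instant>)>,
}

impl<T: Clone> MemoryCache<T> {
    pub fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Store a value; `ttl = None` means the entry never expires.
    pub fn set(&mut self, key: &str, value: T, ttl: Option<Duration>) {
        let deadline = ttl.map(|d| Instant::now() + d);
        self.entries.insert(key.to_string(), (value, deadline));
    }

    /// Read a value, lazily evicting it if its deadline has passed.
    pub fn get(&mut self, key: &str) -> Option<T> {
        let expired = matches!(
            self.entries.get(key),
            Some((_, Some(deadline))) if Instant::now() >= *deadline
        );
        if expired {
            // Expired entry: evict it and report a miss.
            self.entries.remove(key);
            return None;
        }
        self.entries.get(key).map(|(value, _)| value.clone())
    }
}

fn main() {
    let mut cache: MemoryCache<String> = MemoryCache::new();
    cache.set("session", "abc123".to_string(), Some(Duration::from_secs(60)));
    assert_eq!(cache.get("session"), Some("abc123".to_string()));
    println!("cache sketch works");
}
```

A production implementation would add eviction policies and hit/miss counters behind `stats()`; the point here is only the deadline check on the read path.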
Testing Strategy
For each refactored service:
- Define interface tests that work with any implementation
- Create mock implementations for testing
- Test both success and error paths
- Add integration tests for real-world scenarios
- Benchmark before and after to ensure no performance regression
Example test for database interface:
```rust
#[tokio::test]
async fn test_database_interface() {
    // Create a mock database (mutable so expectations can be registered)
    let mut db = MockDatabase::new();

    // Set expectations
    db.expect_get()
        .with(eq("users"), eq("1"))
        .returning(|_, _| Ok(Some("Alice".to_string())));

    // Test the interface
    let result = db.get("users", "1").await.unwrap();
    assert_eq!(result, Some("Alice".to_string()));
}
```
Performance Considerations
While making services generic, maintain performance by:
- Using static dispatch where possible
- Avoiding unnecessary boxing
- Minimizing indirection
- Using async properly
- Implementing efficient provider discovery
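The first two points can be illustrated with a small, self-contained sketch (the `Logger` trait here is hypothetical, not a Navius API): the generic function is monomorphized and dispatched statically, while the `&dyn` version pays for a vtable call.

```rust
/// Hypothetical trait used only to illustrate dispatch choices.
trait Logger {
    fn log(&self, msg: &str) -> String;
}

struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, msg: &str) -> String {
        format!("console: {}", msg)
    }
}

// Static dispatch: a copy is compiled per logger type, there is no
// vtable, and the call can be inlined. Prefer this on hot paths.
fn handle_static<L: Logger>(logger: &L) -> String {
    logger.log("request received")
}

// Dynamic dispatch: one compiled copy, call goes through a vtable.
// Acceptable at composition boundaries such as provider registries.
fn handle_dynamic(logger: &dyn Logger) -> String {
    logger.log("request received")
}

fn main() {
    let logger = ConsoleLogger;
    assert_eq!(handle_static(&logger), handle_dynamic(&logger));
    println!("both dispatch styles agree: {}", handle_static(&logger));
}
```

The usual compromise is generics in the service internals and `Box<dyn Trait>` only where heterogeneous providers must live in one collection.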
Packaging and Distribution
To enable easy adoption:
- Create separate crates for provider implementations
- Use feature flags for optional providers
- Provide examples for each implementation
- Include benchmarks in documentation
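For example, a provider crate's manifest might gate each implementation behind a Cargo feature, so consumers compile only the providers they enable (crate names and versions below are illustrative, not Navius's actual dependencies):

```toml
# Hypothetical layout: each provider sits behind a feature flag.
[features]
default = ["postgres"]
postgres = ["dep:sqlx"]
redis-cache = ["dep:redis"]
full = ["postgres", "redis-cache"]

[dependencies]
sqlx = { version = "0.7", optional = true }
redis = { version = "0.24", optional = true }
```

A downstream application then opts in with `features = ["redis-cache"]` in its own `Cargo.toml`, keeping unused providers out of the binary entirely.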
---
title: "Server Customization System"
description: "Modular build system and feature configuration framework"
category: roadmap
tags:
  - architecture
  - development
  - documentation
  - configuration
last_updated: March 27, 2025
version: 1.6
---
Server Customization System Roadmap
Overview
The Server Customization System provides a robust framework for creating customized server deployments with tailored feature sets. This enables developers to generate optimized server binaries that include only necessary components, resulting in smaller deployments, reduced attack surface, and improved performance. The system includes a modular build system, feature dependency resolution, conditional compilation, and runtime feature detection.
Current Status
Implementation progress has reached 98% completion with the following major milestones achieved:
- Core feature selection framework β
- Feature dependency resolution β
- Configuration integration β
- Basic CLI functionality β
- Dependency analysis and optimization system β
- Documentation generation with error handling β
- Configuration examples generation β
- Feature import/export functionality β
- CLI visualization components β
Target State
A complete feature selection and customization system that allows developers to:
- Select specific features and components to include in server builds β
- Generate optimized server binaries with only necessary components β
- Resolve feature dependencies automatically β
- Create deployment packages for different environments β
- Generate feature-specific documentation β
- Provide a modern, interactive CLI experience β
- Optimize Cargo dependencies based on selected features β
Implementation Progress Tracking
Phase 1: Core Feature Selection Framework (Completed)
- Define Feature Selection Framework β
  - Created modular build system
  - Implemented feature dependency resolution
  - Added support for conditional compilation
  - Implemented runtime feature detection

  Updated at: March 26, 2025 - Core framework is fully functional with FeatureRegistry and RuntimeFeatures implementations

- Create Packaging System β
  - Support containerized deployments
  - Implement binary optimization
  - Add package versioning
  - Create update mechanism
  - Implement Cargo dependency analysis β
  - Add dependency tree visualization β

  Updated at: March 26, 2025 - Packaging system complete with dependency optimization

- Add Documentation Generator β
  - Create feature-specific documentation
  - Generate API reference based on enabled features
  - Add configuration examples for enabled providers
  - Support documentation versioning
  - Improve error handling for robust operation

  Updated at: March 26, 2025 - Documentation generator fully implemented with comprehensive error handling and all linter errors fixed
Phase 2: CLI and User Experience (Completed)
- Enhanced CLI Tool β
  - Add interactive feature selection
  - Implement dependency analysis commands
  - Add progress indicators and animations
  - Create visual dependency tree viewer

  Updated at: March 26, 2025 - Progress: All CLI components implemented and tested

- User Interface Improvements β
  - Add color-coded status display
  - Implement interactive menus
  - Add progress bars and spinners
  - Create dependency visualization

  Updated at: March 26, 2025 - Progress: All UI elements implemented and tested
Phase 3: Testing and Validation (In Progress)
- Comprehensive Testing
  - [β] Add unit tests for all components
  - [β] Implement integration tests
  - [~] Create end-to-end test scenarios
  - [~] Add performance benchmarks

  Updated at: March 26, 2025 - Progress: Test coverage increased to 95%, implemented comprehensive RuntimeFeatures tests, added end-to-end tests for feature system integration and performance benchmarks

- Documentation and Examples
  - Create user guides
  - Add example configurations
  - [~] Document best practices

  Updated at: March 26, 2025 - Progress: User guides completed and added to roadmap-instructions, example configurations added, best practices documentation started
Implementation Status
- Overall Progress: 99% complete
- Last Updated: March 26, 2025
- Next Milestone: Finish end-to-end testing
- Current Focus: Performance benchmarks
Next Steps
- Complete comprehensive testing
  - Implement final end-to-end test scenarios
  - Complete remaining performance benchmarks
  - Optimize based on benchmark results
- Finalize user documentation
  - Complete best practices documentation
  - Add troubleshooting guide for common issues
Success Criteria
- Developers can generate custom server builds with only required features β
- Feature dependencies are automatically and correctly resolved β
- Build process successfully excludes unused code and dependencies β
- Documentation is generated according to enabled features β
- Deployment packages are optimized for different environments β
- Runtime feature detection allows for graceful feature availability handling β
- CLI correctly handles feature dependencies and provides clear feedback β
- Cargo dependencies are automatically optimized based on selected features β
- Robust error handling with helpful error messages for all operations β
- CLI provides intuitive visualization of feature dependencies and status β
Conclusion
The Server Customization System has made significant progress with the implementation of core functionality, dependency optimization, documentation generation, and CLI visualization components. The system now has robust error handling throughout all components and provides an intuitive user interface for managing features. The focus continues to be on comprehensive testing and documentation, with approximately 98% of the planned functionality successfully implemented.
Recent advancements include:
- Completed CLI visualization components with dependency tree viewer
- Implemented interactive feature selection with color-coded status display
- Added progress indicators and animations for better user feedback
- Created visual dependency graph generation
- Enhanced feature status display with size impact visualization
- Improved feature list formatting with multiple output formats
- Fixed module structure and import issues for proper test execution
- Increased test coverage to 90%, with improvements in error handling tests
- Enhanced error propagation between components to provide consistent error reporting
- Optimized dependency analysis to correctly handle edge cases
- Improved module organization for better maintainability and testing
- Implemented robust feature import/export functionality with proper error handling
The next phase will focus on completing the comprehensive testing suite and achieving the 95% test coverage target, with emphasis on end-to-end tests and performance benchmarks.
Detailed Implementation Guide
Step 1: Modular Build System
The first step is to create a modular build system that allows for feature toggling:
```rust
// Feature configuration structure
#[derive(Debug, Clone, Deserialize)]
pub struct FeatureConfig {
    /// Name of the feature
    pub name: String,
    /// Whether this feature is enabled
    pub enabled: bool,
    /// Feature dependencies
    pub dependencies: Vec<String>,
    /// Configuration specific to this feature
    pub config: HashMap<String, Value>,
}

// Feature registry for tracking available features
pub struct FeatureRegistry {
    features: HashMap<String, FeatureConfig>,
    enabled_features: HashSet<String>,
}

impl FeatureRegistry {
    /// Register a new feature
    pub fn register(&mut self, feature: FeatureConfig) -> Result<(), FeatureError> {
        // Check for dependency cycles
        // Validate configuration
        // Add to registry
        Ok(())
    }

    /// Check if a feature is enabled
    pub fn is_enabled(&self, feature_name: &str) -> bool {
        self.enabled_features.contains(feature_name)
    }

    /// Resolve feature dependencies
    pub fn resolve_dependencies(&mut self) -> Result<(), FeatureError> {
        // Topological sort of dependencies
        // Enable required dependencies
        // Report conflicts
        Ok(())
    }
}
```
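The `resolve_dependencies` stub above hinges on a topological sort with cycle detection; a standalone sketch of that step over a plain name-to-dependencies map might look like this (simplified; the real registry works on `FeatureConfig` values):

```rust
use std::collections::{HashMap, HashSet};

/// Depth-first topological sort over a feature -> dependencies map.
/// Returns features in dependency-first order, or an error naming a
/// feature that participates in a cycle.
fn resolve(deps: &HashMap<&str, Vec<&str>>) -> Result<Vec<String>, String> {
    fn visit<'a>(
        name: &'a str,
        deps: &HashMap<&'a str, Vec<&'a str>>,
        done: &mut HashSet<&'a str>,
        in_progress: &mut HashSet<&'a str>,
        order: &mut Vec<String>,
    ) -> Result<(), String> {
        if done.contains(name) {
            return Ok(());
        }
        // Re-entering a node that is still on the stack means a cycle.
        if !in_progress.insert(name) {
            return Err(format!("dependency cycle involving '{}'", name));
        }
        if let Some(list) = deps.get(name) {
            for &dep in list {
                visit(dep, deps, done, in_progress, order)?;
            }
        }
        in_progress.remove(name);
        done.insert(name);
        order.push(name.to_string()); // dependencies were pushed first
        Ok(())
    }

    let mut order = Vec::new();
    let mut done = HashSet::new();
    let mut in_progress = HashSet::new();
    let mut names: Vec<&str> = deps.keys().copied().collect();
    names.sort(); // deterministic order for the sketch
    for name in names {
        visit(name, deps, &mut done, &mut in_progress, &mut order)?;
    }
    Ok(order)
}

fn main() {
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    deps.insert("metrics", vec!["core"]);
    deps.insert("core", vec![]);
    let order = resolve(&deps).unwrap();
    println!("resolution order: {:?}", order);
    assert_eq!(order, vec!["core".to_string(), "metrics".to_string()]);
}
```

The "Circular dependencies (should fail)" test scenario listed later in this document falls out directly: a cycle makes `resolve` return an `Err` instead of an ordering.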
Step 2: Implementation Strategy
For conditional compilation:
- Use Cargo features for compile-time feature toggling
- Implement runtime feature detection for dynamic behavior
- Create macros for conditional execution based on features
```rust
// Example macro for feature-conditional code
#[macro_export]
macro_rules! when_feature_enabled {
    ($feature:expr, $body:block) => {
        if app_state.feature_registry().is_enabled($feature) {
            $body
        }
    };
}

// Usage example
when_feature_enabled!("advanced_metrics", {
    registry.register_counter(
        "advanced.requests.total",
        "Total advanced requests processed",
    );
});
```
Step 3: Enhanced CLI
The enhanced CLI will provide a modern, interactive experience:
- Use crates like `indicatif` for progress bars and spinners
- Implement color-coded output with `colored` or `termcolor`
- Add interactive feature selection with `dialoguer`
- Create animated build processes with visual feedback
- Implement responsive terminal UI using `tui-rs` or similar
Step 4: Cargo Dependency Optimization
The dependency optimization system will:
- Analyze the Cargo.toml file for all dependencies
- Map dependencies to specific features
- Generate optimized Cargo.toml with only required dependencies
- Visualize dependency tree with feature relationships
- Identify and eliminate unused dependencies based on feature selection
Testing Strategy
For the feature selection framework:
- Test feature dependency resolution with various scenarios:
  - Simple dependencies
  - Multi-level dependencies
  - Circular dependencies (should fail)
  - Optional vs. required dependencies
- Test binary generation with different feature sets:
  - Verify excluded code is not in binary
  - Check that dependencies are properly included
  - Validate runtime behavior matches compile-time selection
- Test documentation generation:
  - Verify feature-specific docs are included/excluded appropriately
  - Check cross-references between features
- Test dependency optimization:
  - Verify unused dependencies are properly removed
  - Confirm that necessary transitive dependencies are preserved
  - Check that generated Cargo.toml is valid
Additional Enhancement Opportunities
Beyond the currently planned enhancements, the following areas could further improve the Server Customization System:
- Templated Project Generation
  - Add templates for common server configurations (API-only, full-stack, microservice)
  - Create a starter template system for new projects
  - Generate projects with pre-configured features and dependencies
- Cloud Integration
  - Add deployment profiles for major cloud providers (AWS, Azure, GCP)
  - Generate cloud-specific infrastructure as code
  - Create deployment pipelines for CI/CD systems
- Plugin System
  - Implement a plugin architecture for custom feature providers
  - Allow third-party features to be integrated
  - Support dynamic loading of plugins at runtime
- Feature Health Monitoring
  - Add telemetry to track feature usage in production
  - Implement health checks for each feature
  - Create dashboards to visualize feature performance
- Configuration Validation
  - Implement JSON Schema validation for feature configurations
  - Add static analysis of configuration values
  - Provide interactive configuration validation in CLI
- Advanced Profiling
  - Add memory and performance profiling for each feature
  - Generate resource utilization reports
  - Provide recommendations for optimal feature combinations
These enhancements would further improve developer experience, deployment efficiency, and operational stability of customized server deployments.
Implementation Details
Feature Registry Implementation
The Feature Registry serves as the central component for tracking available features and their dependencies. The implementation includes:
```rust
// Core feature registry implementation
pub struct FeatureRegistry {
    /// Available features with their metadata
    features: HashMap<String, FeatureInfo>,
    /// Feature groups for organization
    groups: HashMap<String, Vec<String>>,
    /// Currently enabled features
    enabled_features: HashSet<String>,
}

impl FeatureRegistry {
    /// Create a new feature registry with default features
    pub fn new() -> Self {
        let mut registry = Self {
            features: HashMap::new(),
            groups: HashMap::new(),
            enabled_features: HashSet::new(),
        };

        // Register core features that are always enabled
        registry.register_core_features();

        // Register optional features
        registry.register_optional_features();

        // Select default features
        registry.select_defaults();

        registry
    }

    /// Register a feature
    pub fn register(&mut self, feature: FeatureInfo) -> Result<(), FeatureError> {
        // Validate feature information
        if feature.name.is_empty() {
            return Err(FeatureError::ValidationError(
                "Feature name cannot be empty".to_string(),
            ));
        }

        // Check for existing feature
        if self.features.contains_key(&feature.name) {
            return Err(FeatureError::DuplicateFeature(feature.name));
        }

        // Store feature information
        self.features.insert(feature.name.clone(), feature);

        Ok(())
    }

    /// Enable a feature and its dependencies
    pub fn enable(&mut self, feature_name: &str) -> Result<(), FeatureError> {
        // Check feature exists
        if !self.features.contains_key(feature_name) {
            return Err(FeatureError::UnknownFeature(feature_name.to_string()));
        }

        // Add to enabled set
        self.enabled_features.insert(feature_name.to_string());

        // Enable dependencies
        let dependencies = {
            let feature = self.features.get(feature_name).unwrap();
            feature.dependencies.clone()
        };

        for dep in dependencies {
            self.enable(&dep)?;
        }

        Ok(())
    }

    /// Validate that all feature dependencies are satisfied
    pub fn validate(&self) -> Result<(), FeatureError> {
        for feature_name in &self.enabled_features {
            let feature = self.features.get(feature_name)
                .ok_or_else(|| FeatureError::UnknownFeature(feature_name.clone()))?;

            for dep in &feature.dependencies {
                if !self.enabled_features.contains(dep) {
                    return Err(FeatureError::MissingDependency(
                        feature_name.clone(),
                        dep.clone(),
                    ));
                }
            }
        }

        Ok(())
    }
}
```
Runtime Feature Detection
The runtime feature detection system allows for conditional code execution based on enabled features:
```rust
pub struct RuntimeFeatures {
    /// Currently enabled features
    enabled: HashSet<String>,
    /// Status of features (enabled/disabled)
    status: HashMap<String, bool>,
}

impl RuntimeFeatures {
    /// Create from the feature registry
    pub fn from_registry(registry: &FeatureRegistry) -> Self {
        let enabled = registry.enabled_features().clone();
        let mut status = HashMap::new();

        for feature in registry.features() {
            status.insert(
                feature.name.clone(),
                registry.is_enabled(&feature.name),
            );
        }

        Self { enabled, status }
    }

    /// Check if a feature is enabled
    pub fn is_enabled(&self, feature: &str) -> bool {
        self.enabled.contains(feature)
    }

    /// Enable a feature at runtime
    pub fn enable(&mut self, feature: &str) {
        self.enabled.insert(feature.to_string());
        self.status.insert(feature.to_string(), true);
    }

    /// Disable a feature at runtime
    pub fn disable(&mut self, feature: &str) {
        self.enabled.remove(feature);
        self.status.insert(feature.to_string(), false);
    }
}
```
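Typical runtime toggling might then look like this (a trimmed, self-contained re-declaration of `RuntimeFeatures` is included so the sketch compiles on its own; the real type is constructed via `from_registry`):

```rust
use std::collections::{HashMap, HashSet};

// Trimmed re-declaration of RuntimeFeatures for a standalone example.
pub struct RuntimeFeatures {
    enabled: HashSet<String>,
    status: HashMap<String, bool>,
}

impl RuntimeFeatures {
    pub fn new() -> Self {
        Self { enabled: HashSet::new(), status: HashMap::new() }
    }

    pub fn is_enabled(&self, feature: &str) -> bool {
        self.enabled.contains(feature)
    }

    pub fn enable(&mut self, feature: &str) {
        self.enabled.insert(feature.to_string());
        self.status.insert(feature.to_string(), true);
    }

    pub fn disable(&mut self, feature: &str) {
        self.enabled.remove(feature);
        self.status.insert(feature.to_string(), false);
    }
}

fn main() {
    let mut features = RuntimeFeatures::new();

    // A handler can branch on the flag at request time...
    features.enable("advanced_metrics");
    assert!(features.is_enabled("advanced_metrics"));

    // ...and a feature disabled at runtime degrades gracefully.
    features.disable("advanced_metrics");
    assert!(!features.is_enabled("advanced_metrics"));
    println!("runtime toggling works");
}
```

This is what allows graceful degradation: compile-time exclusion removes the code entirely, while the runtime set lets operators switch compiled-in features off without redeploying.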
Conclusion
The Server Customization System provides a robust framework for creating tailored server deployments with optimized feature sets. The modular design allows for flexible configuration of enabled features, automatic dependency resolution, and efficient binary generation.
The system has been successfully implemented with approximately 95% of the planned functionality, including:
- Feature selection framework with dependency resolution
- Configuration integration with the core application
- Documentation generation based on enabled features
- Packaging system for optimized deployments
- CLI interface for feature management
- Dependency analysis and optimization system
Future development will focus on enhancing the interactive CLI experience, implementing Cargo dependency analysis for further optimization, and expanding the feature set with the enhancement opportunities outlined above.
Next Steps
- Complete Interactive CLI: Finish the implementation of the modern, interactive CLI with animations and visual feedback
- Implement Dependency Analysis: Add the Cargo dependency analyzer to optimize builds
- Expand Test Coverage: Add more comprehensive tests for feature interactions
- Create User Documentation: Develop user guides for working with the feature system
- Evaluate Plugin System: Begin design for the plugin architecture as the next major enhancement
---
title: Navius Documentation
description: Comprehensive documentation for the Navius framework
category: index
tags:
  - documentation
  - index
  - overview
related:
  - 01_getting_started/
  - 04_guides/
  - 05_reference/
  - 03_contributing/
  - 02_examples/
last_updated: March 27, 2025
version: 1.0
---
Navius Documentation
This repository contains the official documentation for the Navius framework, providing comprehensive guides, tutorials, references, and examples for developers building applications with Navius.
Documentation Structure
The documentation is organized into clear sections to help you find what you need:
Getting Started
Everything you need to start using Navius, including installation, quickstart guide, and basic concepts.
Examples
Practical code examples demonstrating how to implement common features and solve typical challenges.
- Basic application examples
- Integration examples
- Advanced feature implementations
- Sample projects
Contributing
Guidelines for contributing to Navius, including code standards, pull request process, and development workflow.
Guides
Comprehensive guides for developing applications with Navius, organized by topic.
- Development
- Features
- Security
- Performance
Reference
Detailed technical reference documentation for Navius APIs, configuration options, and patterns.
- API Reference
- Configuration Reference
- Patterns
Documentation Highlights
- Comprehensive API Reference: Detailed documentation for all Navius APIs with request/response examples, error handling, and integration patterns.
- Step-by-Step Guides: Clear, actionable guides for implementing common features and best practices.
- Practical Examples: Real-world code examples that demonstrate how to use Navius effectively.
- Development Best Practices: Guidance on IDE setup, testing, debugging, and performance optimization.
- Security Implementation: Detailed guides on implementing authentication, authorization, and data protection.
Using the Documentation
For New Users
If you're new to Navius, start with:
- Installation Guide to set up Navius
- Quickstart Guide to create your first application
- Hello World Tutorial for a step-by-step walkthrough
For Regular Developers
If you're already using Navius:
- Explore the Guides for implementing specific features
- Refer to the API Reference for detailed technical information
- Check out Examples for code samples and patterns
For Contributors
To contribute to Navius:
- Read the Contribution Guidelines
- Follow the Development Workflow
- Submit your changes following the Pull Request Process
Documentation Updates
This documentation is continuously improved. Recent updates include:
- Enhanced API reference documentation with comprehensive examples
- New comprehensive security guides
- Improved development setup and IDE configuration guidance
- Expanded testing and debugging documentation
Support
If you have questions about using Navius or need help with the documentation:
- GitHub Issues for bug reports and feature requests
- Discord Community for community support and discussions
- Stack Overflow using the 'navius' tag
License
This documentation is licensed under the MIT License.
π Documentation Sections
π Getting Started
Quick start guides to get up and running with Navius:
- Installation - How to install Navius
- Development Setup - Setting up your development environment
- First Steps - Getting started with Navius
π Examples
Practical code examples:
- Overview - Introduction to examples
- Spring Boot Comparison - Comparing with Spring Boot
- Two-Tier Cache Implementation - Implementing two-tier caching
- Server Customization System - Using the feature system
- Repository Pattern Example - Implementing the generic repository pattern
- Logging Service Example - Using the generic logging service
- Database Service Example - Working with the generic database service
- Health Service Example - Creating custom health indicators
- Cache Provider Example - Using the generic cache providers
π€ Contributing
Guidelines for contributors:
- Overview - Introduction to contributing
- Contributing Guide - How to contribute
- Code of Conduct - Community guidelines
- Development Process - Development workflow
- Testing Guidelines - Writing tests
- Onboarding - Getting started as a contributor
- IDE Setup - Setting up your development environment
- Testing Prompt - Testing guidelines
- Test Implementation Template - Templates for tests
π οΈ Guides
Practical guides for using Navius:
- Overview - Introduction to Navius guides
- Development - Development workflow and practices
  - Development Workflow - Day-to-day development process
  - Testing Guide - How to test Navius applications
  - Debugging Guide - Debugging your applications
  - IDE Setup - Setting up your development environment
  - Git Workflow - Version control practices
- Features - Implementing specific features
  - Authentication - Implementing authentication
  - API Integration - Integrating with external APIs
  - PostgreSQL Integration - Working with PostgreSQL in features
  - Redis Caching - Implementing basic caching
  - Server Customization CLI - Using the feature selection CLI
  - WebSocket Support - Real-time communication
- Deployment - Deploying Navius applications
  - Production Deployment - Deploying to production
  - Docker Deployment - Working with Docker
  - AWS Deployment - Deploying to AWS
  - Kubernetes Deployment - Deploying to Kubernetes
- Caching Strategies - Advanced caching with two-tier cache
- PostgreSQL Integration - Comprehensive PostgreSQL integration
- Application Structure - App structure guide
- Configuration - Configuration guide
- Dependency Injection - DI guide
- Error Handling - Error handling guide
- Feature Selection - Feature selection guide
- Service Registration - Service registration guide
- Testing - Testing guide
π Reference
Technical reference documentation:
- Overview - Introduction to reference documentation
- API - API documentation
  - API Resources - Core API resources
  - Authentication API - Authentication endpoints
  - Database API - Database interaction APIs
- Architecture - Architecture patterns and principles
  - Principles - Architectural principles
  - Project Structure - Project structure overview
  - Project Structure Recommendations - Recommended structure
  - Directory Organization - How directories are organized
  - Component Architecture - Component design
  - Design Principles - Design principles
  - Extension Points - Extension points
  - Module Dependencies - Module dependencies
  - Provider Architecture - Provider architecture
  - Service Architecture - Service architecture
  - Spring Boot Migration - Spring Boot migration
- Auth - Authentication documentation
  - Error Handling - Auth error handling
  - Auth Circuit Breaker - Auth circuit breaker
  - Auth Metrics - Auth metrics
  - Auth Provider Implementation - Auth provider implementation
- Configuration - Configuration options and settings
  - Environment Variables - Environment configuration
  - Application Config - Application settings
  - Cache Config - Cache system configuration
  - Feature Config - Server customization configuration
  - Logging Config - Logging configuration
  - Security Config - Security settings
- Patterns - Common design patterns
  - API Resource Pattern - API design patterns
  - Import Patterns - Module import patterns
  - Caching Patterns - Effective caching strategies
  - Error Handling - Error handling approaches
  - Testing Patterns - Testing best practices
  - Repository Pattern - Entity repository pattern
  - Logging Service Pattern - Generic logging service implementations
- Standards - Code and documentation standards
  - Naming Conventions - Naming guidelines
  - Code Style - Code formatting standards
  - Generated Code - Generated code guidelines
  - Security Standards - Security best practices
  - Documentation Standards - Documentation guidelines
  - Configuration Standards - Configuration standards
  - Error Handling Standards - Error handling standards
  - Error Handling - Error handling guide
- Generated - Generated reference documentation
  - API Index - API index
  - Configuration Index - Configuration index
  - Development Configuration - Development configuration
  - Production Configuration - Production configuration
  - Testing Configuration - Testing configuration
  - Features Index - Features index
πΊοΈ Roadmaps
Project roadmaps and future plans:
- Overview - Introduction to project roadmaps
- Template for Updating - How to update roadmaps
- Dependency Injection - DI implementation roadmap
- Database Integration - Database features roadmap
- Testing Framework - Testing capabilities roadmap
𧩠Miscellaneous
Additional resources and documentation:
- Feature System - Overview of the feature system
- Testing Guidance - Additional testing guidance
- Document Template - Documentation template
- Migration Plan - Documentation migration plan
π Documentation Search
Use the search functionality in the top bar to search through all documentation, or use your browser's search (Ctrl+F / Cmd+F) to search within the current page.
π Documentation Standards
All documentation follows these standards:
- Frontmatter: Each document includes metadata in the YAML frontmatter
- Structure: Clear headings and subheadings with logical progression
- Code Examples: Practical examples with syntax highlighting
- Cross-referencing: Links to related documentation
- Up-to-date: Regular reviews and updates to ensure accuracy
π Need Help?
If you can't find what you're looking for, please:
- Check the GitLab Issues for known documentation issues
- Open a new documentation issue if you find something missing or incorrect