Blockchain Security · 2025-12-02 · 18 min read

Native Rust on Solana: 7 Security Mistakes That Drain Protocols

Pushkar Mishra
Security Expert
#Solana · #Rust · #Security · #Smart Contracts · #DeFi · #Vulnerabilities · #Native Rust · #Audit · #PDA · #Solana Security Audit · #Anchor vs Native Rust · #Solana Vulnerabilities

Native Rust isn't a silver bullet for avoiding mistakes in Solana code

Solana development is notoriously error-prone. While many developers reach for Anchor to simplify their code and reduce common mistakes, some teams choose native Rust for maximum control, performance optimization, or specific architectural requirements. But here's the catch: native Rust development introduces its own class of subtle, dangerous vulnerabilities that can slip past even experienced developers.

Let's explore some of them.

Background

Writing secure Solana programs without a framework is an exercise in meticulous attention to detail. You're responsible for everything: account validation, PDA derivation, serialization, access control, and state management. There's no safety net.

Native Rust development on Solana requires intimate familiarity with:

  • Manual account deserialization and validation
  • PDA seed derivation and bump management
  • State synchronization across multiple accounts
  • Token program interactions via CPI
  • Rent and account lifecycle management
  • Sequential operation validation

As a smart-contract auditing firm, we've reviewed countless Solana programs across DeFi protocols, NFT marketplaces, gaming platforms, and more. Even well-structured codebases with experienced developers behind them regularly exhibit these vulnerability patterns. The impact of these bugs ranges from denial-of-service to permanent loss of user funds.

Building on Solana? The vulnerability patterns below have cost protocols millions. Get expert security review before deploying to mainnet.

State Inconsistency: When Your Tracking List Lies

One of the most insidious vulnerability patterns in native Rust programs is state inconsistency between tracking structures and actual on-chain state. This happens when your program maintains a list or vector to track something (like which accounts exist or which operations are valid), but that list gets out of sync with the actual PDAs on-chain.

The Pattern

Many protocols maintain tracking lists for efficiency. An AMM might track active liquidity pools, a lending protocol might track valid collateral markets, or a rewards system might track distribution epochs. These lists let you iterate without scanning the entire account space.

pub struct ProtocolConfig {
    pub authority: Pubkey,
    pub total_value_locked: u64,
    pub active_markets: Vec<u64>,  // Tracks which market IDs are active
}

To prevent unbounded growth, protocols often compact these lists, removing old or inactive entries:

// Clean up old markets during maintenance
config.active_markets.retain(|&id| id >= current_period);

Seems reasonable, right? We're just cleaning up old data. But here's where things go wrong.

The Vulnerability

Problems emerge when other functions use this list as a gatekeeper:

pub fn process_user_action(ctx: Context, market_id: u64) -> ProgramResult {
    let config = &ctx.accounts.config;
    
    // Check if market exists using the tracking list
    if !config.active_markets.contains(&market_id) {
        msg!("Market {} doesn't exist, skipping", market_id);
        return Ok(()); // Silent success with no action
    }
    
    // ... process the action
}

The critical issue: the actual Market PDAs are never closed when removed from the tracking list. They still exist on-chain, potentially holding user funds or state, but the program thinks they don't exist.

Consider a user who deposited into market 5. After compaction removes market 5 from active_markets:

  1. User calls withdraw(market_id=5)
  2. The tracking list check fails
  3. Function returns Ok(()) (silent success)
  4. User's state might be updated (marking withdrawal as "processed")
  5. Their funds remain locked in the Market PDA forever
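The drift is easy to reproduce off-chain. The sketch below is plain Rust, not on-chain code: a HashMap stands in for the Market PDAs, a Vec for the tracking list, and the names (Protocol, compact, withdraw) are invented for illustration:

```rust
use std::collections::HashMap;

// Stand-ins for on-chain state: the tracking list lives in config,
// the actual "PDAs" (market_id -> locked funds) live in `markets`.
struct Protocol {
    active_markets: Vec<u64>,
    markets: HashMap<u64, u64>, // market_id -> lamports held
}

impl Protocol {
    // Compaction prunes IDs from the list but never closes the "PDAs".
    fn compact(&mut self, current_period: u64) {
        self.active_markets.retain(|&id| id >= current_period);
    }

    // Gatekeeping on the list: silently no-ops for pruned markets,
    // like the Ok(()) return in the snippet above.
    fn withdraw(&mut self, market_id: u64) -> Option<u64> {
        if !self.active_markets.contains(&market_id) {
            return None;
        }
        self.markets.remove(&market_id)
    }
}
```

After compaction, the gate reports the market as gone while the funds still sit in the map — exactly the drift described above.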

Real-World Manifestations

This pattern appears across protocol types:

  • Lending protocols: Tracking valid collateral types, then removing support without migrating positions
  • DEXs: Tracking active trading pairs, then deprecating pools with liquidity still locked
  • Reward systems: Tracking claimable epochs, then compacting history before all users claim
  • NFT marketplaces: Tracking active listings, then removing old entries while bids remain

The Fix

Never use a tracking list as the source of truth for PDA existence. Always validate actual on-chain state:

pub fn process_user_action(ctx: Context, market_id: u64) -> ProgramResult {
    let market_account = &ctx.accounts.market;
    
    // Check actual PDA state, not a tracking list
    if market_account.data_is_empty() {
        return Err(ProtocolError::MarketNotFound.into());
    }
    
    let market = Market::try_from_slice(&market_account.data.borrow())?;
    if !market.is_initialized {
        return Err(ProtocolError::MarketNotInitialized.into());
    }
    
    // ... process with confidence
}

Treat tracking lists as optimization hints for iteration, not security gates for access control.

Sequential Validation Gaps: The First-Operation Trap

Sequential operations are everywhere in DeFi. Users must claim rewards epoch-by-epoch, process queued withdrawals in order, vest tokens according to schedules, or execute multi-step liquidations. The validation logic for these sequences often has a subtle gap: the first operation.

The Pattern

A protocol enforces sequential processing to maintain invariants:

pub fn process_claim(ctx: Context, epoch: u64) -> ProgramResult {
    let user_state = &mut ctx.accounts.user_state;
    
    // Enforce sequential claiming
    if user_state.last_processed_epoch > 0 {
        if epoch != user_state.last_processed_epoch + 1 {
            msg!("Must claim epochs sequentially");
            return Err(ProgramError::InvalidArgument);
        }
    }
    
    // Process claim...
    user_state.last_processed_epoch = epoch;
    
    Ok(())
}

The logic reads: "if the user has processed before, they must process the next epoch in sequence."

The Vulnerability

What happens when last_processed_epoch == 0? The condition evaluates to false, and no validation occurs for the first operation.

A user eligible for epochs 1 through 5 can:

  1. Call process_claim(epoch=5) as their first claim
  2. Since last_processed_epoch is 0, the sequential check is skipped
  3. Epoch 5 is processed successfully
  4. last_processed_epoch is set to 5
  5. Epochs 1-4 become permanently inaccessible due to sequential validation

The user just lost 80% of their earned value through a single misclick or malicious frontend.
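Reduced to a pure function (a hypothetical model of the check above, with 0 as the "never claimed" sentinel), the gap is plain:

```rust
// Mirrors the vulnerable check: returns true when the claim is accepted.
// A last_processed_epoch of 0 means "never claimed".
fn accepts_claim(last_processed_epoch: u64, epoch: u64) -> bool {
    if last_processed_epoch > 0 {
        // Subsequent claims must be sequential.
        epoch == last_processed_epoch + 1
    } else {
        // First claim: no validation at all.
        true
    }
}
```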

Real-World Manifestations

This pattern appears in:

  • Vesting contracts: Users can claim the final vesting tranche, skipping earlier ones
  • Reward distribution: Users skip to the latest epoch, forfeiting historical rewards
  • Batch processing: First batch can be any batch, breaking assumed ordering
  • Upgrade migrations: Users can migrate to v3 directly, skipping required v2 state transitions

The Fix

Always validate the first operation explicitly:

pub fn process_claim(ctx: Context, epoch: u64) -> ProgramResult {
    let user_state = &mut ctx.accounts.user_state;
    
    if user_state.last_processed_epoch == 0 {
        // First claim must be the user's earliest eligible epoch
        let first_eligible = user_state.first_eligible_epoch;
        if epoch != first_eligible {
            msg!("First claim must start at epoch {}", first_eligible);
            return Err(ProgramError::InvalidArgument);
        }
    } else {
        // Subsequent claims must be sequential
        if epoch != user_state.last_processed_epoch + 1 {
            msg!("Must claim epochs sequentially");
            return Err(ProgramError::InvalidArgument);
        }
    }
    
    // Process claim...
    user_state.last_processed_epoch = epoch;
    
    Ok(())
}

Better yet, don't rely on users to specify the epoch at all:

pub fn process_next_claim(ctx: Context) -> ProgramResult {
    let user_state = &mut ctx.accounts.user_state;
    
    // Automatically determine the next epoch to process
    let next_epoch = if user_state.last_processed_epoch == 0 {
        user_state.first_eligible_epoch
    } else {
        user_state.last_processed_epoch + 1
    };
    
    // Process claim for next_epoch...
}

Exit Logic Pitfalls: The Inescapable Position

Protocols often implement minimum thresholds: minimum stake amounts, minimum liquidity provision, minimum collateral ratios. But implementing these requirements for withdrawal operations requires careful thought about all exit paths.

The Pattern

Consider a protocol with minimum position requirements:

pub fn process_withdraw(ctx: Context, amount: u64) -> ProgramResult {
    let config = &ctx.accounts.config;
    let position = &mut ctx.accounts.position;
    
    let new_position_size = position.deposited.checked_sub(amount)
        .ok_or(ProgramError::InsufficientFunds)?;
    
    // Enforce minimum position size
    if new_position_size < config.min_position_size {
        msg!("Position would fall below minimum of {}", config.min_position_size);
        return Err(ProgramError::InvalidArgument);
    }
    
    // Process withdrawal...
}

This ensures positions always meet the minimum threshold. Sounds like good protocol design.

The Vulnerability

Issue 1: Users Can Never Fully Exit

If a user deposits exactly the minimum (or their position shrinks to near-minimum through other mechanisms), they can never withdraw fully. Some portion of their funds is permanently locked with no exit mechanism.

User deposits: 1000 tokens
Minimum: 100 tokens
Max withdrawable: 900 tokens
Trapped forever: 100 tokens
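A tiny helper (hypothetical name) makes the boundary explicit: under this rule, the most a user can ever withdraw is deposited minus the minimum:

```rust
// Maximum withdrawable under a minimum-position rule that has no
// full-exit carve-out.
fn max_partial_withdraw(deposited: u64, min_position: u64) -> u64 {
    deposited.saturating_sub(min_position)
}
```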

Issue 2: Shared State Race Conditions

If the minimum check references global state that multiple users affect, race conditions emerge:

// Global total must stay above minimum
if config.total_deposits - amount < config.min_total_deposits {
    return Err(ProgramError::InsufficientFunds);
}

Attack scenario:

  1. The protocol holds 1000 total deposits; the global minimum is 100
  2. Alice has 600 deposited, Bob has 400
  3. Alice withdraws her full 600 (total becomes 400, still valid)
  4. Bob tries to withdraw his 400
  5. The check fails: 400 - 400 = 0 falls below the minimum, so 100 of Bob's tokens are trapped

Alice front-ran Bob and drained the shared buffer down to the minimum, making Bob's full withdrawal impossible.
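The race is reproducible with a plain-Rust model of the global check (Pool and its fields are invented stand-ins for the config state above):

```rust
// One global minimum shared by all depositors: whoever exits first wins.
struct Pool {
    total: u64,
    min_total: u64,
}

impl Pool {
    fn withdraw(&mut self, amount: u64) -> Result<(), &'static str> {
        let remaining = self.total.checked_sub(amount).ok_or("underflow")?;
        if remaining < self.min_total {
            return Err("would breach global minimum");
        }
        self.total = remaining;
        Ok(())
    }
}
```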

Real-World Manifestations

  • Staking protocols: Minimum stake requirements that trap dust amounts
  • Lending protocols: Minimum collateral that prevents full position closure
  • AMM pools: Minimum liquidity requirements that trap LP tokens
  • Governance: Minimum voting power thresholds that lock tokens

The Fix

Always provide an explicit full-exit path:

pub fn process_withdraw(ctx: Context, amount: u64) -> ProgramResult {
    let config = &ctx.accounts.config;
    let position = &mut ctx.accounts.position;
    
    let new_position_size = position.deposited.checked_sub(amount)
        .ok_or(ProgramError::InsufficientFunds)?;
    
    // Allow full exit OR enforce minimum
    if new_position_size != 0 && new_position_size < config.min_position_size {
        msg!("Partial withdrawal would leave position below minimum");
        msg!("Either withdraw less or withdraw everything");
        return Err(ProgramError::InvalidArgument);
    }
    
    // If full exit, close the position account and return rent
    if new_position_size == 0 {
        close_position_account(ctx)?;
    }
    
    // Process withdrawal...
}

For shared state constraints, track individual contributions separately:

pub struct Position {
    pub owner: Pubkey,
    pub deposited: u64,
    pub min_required: u64,  // Per-position minimum, set at deposit time
}
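Combining the two fixes, a per-position minimum with a full-exit carve-out can be modeled as a pure function (a sketch; validate_withdraw is a hypothetical helper):

```rust
// Returns the remaining position size if the withdrawal is allowed:
// either a full exit (remaining == 0) or a partial withdrawal that
// keeps the position at or above its own minimum.
fn validate_withdraw(
    deposited: u64,
    min_required: u64,
    amount: u64,
) -> Result<u64, &'static str> {
    let remaining = deposited.checked_sub(amount).ok_or("insufficient funds")?;
    if remaining != 0 && remaining < min_required {
        return Err("withdraw less, or withdraw everything");
    }
    Ok(remaining)
}
```

Because each position carries its own minimum, no user's exit can invalidate another user's withdrawal.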

These exit logic bugs trap user funds permanently. We've identified similar patterns across lending protocols, AMMs, and staking platforms. Book a comprehensive audit to ensure your users can always exit safely.

Account Lifecycle Mismanagement: The Lamport Leak

Solana's rent system requires accounts to maintain a minimum balance to remain on-chain. When accounts are no longer needed, they should be closed and their lamports returned to the payer. Forgetting this step creates a lamport leak, and sometimes worse.

The Pattern

A protocol creates accounts during user onboarding or operation setup:

pub fn initialize_user_account(ctx: Context) -> ProgramResult {
    // Create PDA for user
    let user_account = &ctx.accounts.user_account;
    let payer = &ctx.accounts.payer;
    
    // Payer funds the rent
    create_account(
        payer,
        user_account,
        required_lamports,
        account_size,
        program_id,
    )?;
    
    // Initialize state...
    Ok(())
}

When the user closes their position or leaves the protocol:

pub fn close_position(ctx: Context) -> ProgramResult {
    let position = &mut ctx.accounts.position;
    
    // Transfer tokens back to user
    transfer_tokens(position.deposited, user_token_account)?;
    
    // Reset position state
    position.deposited = 0;
    position.is_active = false;
    
    Ok(())  // Account stays open, rent is trapped
}

The Vulnerability

The position PDA remains open after closure. The rent lamports (often 0.002-0.003 SOL per account) stay locked. At scale:

  • 10,000 users × 0.002 SOL = 20 SOL trapped
  • 100,000 users × 0.002 SOL = 200 SOL trapped

Beyond the economic waste, orphaned accounts create other issues:

  1. State confusion: Is a zeroed account "closed" or "never initialized"?
  2. Reinitialization attacks: Can someone reinitialize an orphaned account?
  3. Bloated indexing: Off-chain indexers must track accounts that should be gone
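A common native-Rust answer to the "closed or never initialized?" question is an explicit lifecycle discriminant in the first byte of account data, instead of inferring state from all-zero bytes. A minimal sketch, with the byte values chosen arbitrarily for illustration:

```rust
// First byte of account data encodes lifecycle state explicitly.
#[derive(Debug, PartialEq)]
enum AccountState {
    Uninitialized,
    Active,
    Closed,
}

fn read_state(data: &[u8]) -> Result<AccountState, &'static str> {
    match data.first() {
        None | Some(0) => Ok(AccountState::Uninitialized),
        Some(1) => Ok(AccountState::Active),
        // A "tombstone" value lets the program reject reinitialization
        // attempts within the same transaction.
        Some(2) => Ok(AccountState::Closed),
        Some(_) => Err("corrupt discriminant"),
    }
}
```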

Real-World Manifestations

  • Token accounts: ATAs created for one-time transfers, never closed
  • Escrow accounts: Escrows settled but accounts left open
  • Order books: Filled or cancelled orders leaving account dust
  • Gaming: Completed game sessions leaving orphaned state accounts

The Fix

Always close accounts when they're no longer needed:

pub fn close_position(ctx: Context) -> ProgramResult {
    let position = &ctx.accounts.position; // AccountInfo for the position PDA
    let user = &ctx.accounts.user;
    
    let state = Position::try_from_slice(&position.data.borrow())?;
    
    // Transfer tokens back to user
    transfer_tokens(state.deposited, user_token_account)?;
    
    // Close the account and return rent to user
    let dest_starting_lamports = user.lamports();
    let position_lamports = position.lamports();
    
    **user.lamports.borrow_mut() = dest_starting_lamports
        .checked_add(position_lamports)
        .ok_or(ProgramError::ArithmeticOverflow)?;
    **position.lamports.borrow_mut() = 0;
    
    // Zero the account data (security best practice)
    position.data.borrow_mut().fill(0);
    
    Ok(())
}

For token accounts, use the SPL Token close_account instruction:

let close_ix = spl_token::instruction::close_account(
    token_program.key,
    token_account.key,
    destination.key,  // Receives the lamports
    authority.key,
    &[],
)?;

invoke_signed(&close_ix, accounts, signer_seeds)?;

Input Validation Gaps: The Zero-Value Attack

Input validation seems obvious, but in the pressure to ship features, basic checks often get overlooked. One common oversight: accepting zero or near-zero amounts.

The Pattern

Protocol functions accept user-provided amounts:

pub fn deposit(ctx: Context, amount: u64) -> ProgramResult {
    let vault = &mut ctx.accounts.vault;
    let user_position = &mut ctx.accounts.user_position;
    
    // Transfer tokens
    transfer(user_token_account, vault_token_account, amount)?;
    
    // Update state
    user_position.deposited += amount;
    vault.total_deposits += amount;
    
    // Log the deposit
    msg!("deposit: user={} amount={}", ctx.accounts.user.key, amount);
    
    Ok(())
}

No validation on amount. What happens when amount == 0?

The Vulnerability

Zero-value transactions can:

  1. Spam the protocol: Anyone can flood with zero-value operations
  2. Pollute state: Increment counters, update timestamps, create history entries
  3. Spam events: Fill logs and indexers with meaningless data
  4. Grief compute budgets: Force the program to execute logic for no economic value
  5. Bypass rate limits: If rate limiting counts operations, not value
  6. Game reward systems: Some reward calculations might divide by zero or behave unexpectedly

More subtle: near-zero values that pass validation but create dust:

// User deposits 1 lamport (0.000000001 SOL)
// Creates a full account entry
// May earn minimum rewards
// Never economically viable to withdraw

Real-World Manifestations

  • Airdrops: Zero-value claims that create account entries
  • DEXs: Zero-value swaps that update price oracles
  • Lending: Dust deposits that earn minimum interest allocations
  • NFT marketplaces: Zero-value bids that clutter listings

The Fix

Validate inputs early and comprehensively:

pub fn deposit(ctx: Context, amount: u64) -> ProgramResult {
    // Validate amount immediately
    if amount == 0 {
        msg!("Deposit amount must be greater than 0");
        return Err(ProgramError::InvalidArgument);
    }
    
    // Optional: enforce meaningful minimum
    let min_deposit = ctx.accounts.config.min_deposit_amount;
    if amount < min_deposit {
        msg!("Deposit must be at least {}", min_deposit);
        return Err(ProgramError::InvalidArgument);
    }
    
    // Now process...
}

Consider economic minimums based on transaction costs:

// Deposit should be worth more than the transaction cost to withdraw
const MIN_MEANINGFUL_DEPOSIT: u64 = 10_000; // Example: 0.00001 SOL worth
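One way to choose that floor (figures and names purely illustrative) is to tie it to the estimated cost of the exit transaction, so every deposit remains economically withdrawable:

```rust
// Require each deposit to cover some multiple of the estimated
// exit-transaction fee (all values in lamports; figures illustrative).
fn min_meaningful_deposit(est_exit_fee: u64, multiple: u64) -> u64 {
    est_exit_fee.saturating_mul(multiple)
}
```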

Arithmetic Inconsistencies: The Creeping Overflow

Rust's default arithmetic operators panic on overflow in debug builds but silently wrap in release builds, and Solana programs are compiled in release mode. Unless overflow-checks = true is explicitly enabled in the release profile, overflows and underflows silently produce wrong values.

The Pattern

Developers might be careful in critical calculations:

// Careful: using checked arithmetic
let new_balance = old_balance.checked_add(deposit)
    .ok_or(ProgramError::ArithmeticOverflow)?;

But then use unchecked arithmetic elsewhere:

// Oops: unchecked in lamport transfer
**source.lamports.borrow_mut() -= transfer_amount;
**dest.lamports.borrow_mut() += transfer_amount;

The Vulnerability

Inconsistent arithmetic handling creates subtle bugs:

// This function is "safe"
pub fn calculate_rewards(stake: u64, rate: u64) -> Result<u64> {
    stake.checked_mul(rate)
        .ok_or(MathError::Overflow)?
        .checked_div(PRECISION)
        .ok_or(MathError::DivisionByZero)
}

// But this function wraps silently
pub fn update_totals(ctx: Context, amount: u64) -> ProgramResult {
    let state = &mut ctx.accounts.state;
    state.total_deposits += amount;  // Silent overflow!
    state.deposit_count += 1;        // Silent overflow!
    Ok(())
}

An attacker who can trigger overflow in total_deposits might:

  • Reset protocol TVL tracking to zero
  • Break reward calculations that use total deposits
  • Bypass caps or limits based on total deposits
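The difference is easy to demonstrate off-chain. The sketch uses the explicit wrapping_add method so it behaves identically in debug and release builds, mirroring what an unchecked += does on-chain:

```rust
// Silent wrap vs. explicit failure on the same operation.
fn unchecked_style(total: u64, amount: u64) -> u64 {
    total.wrapping_add(amount) // what `+=` does in release builds
}

fn checked_style(total: u64, amount: u64) -> Option<u64> {
    total.checked_add(amount) // what the program should do
}
```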

The Fix

Use checked arithmetic consistently everywhere:

pub fn update_totals(ctx: Context, amount: u64) -> ProgramResult {
    let state = &mut ctx.accounts.state;
    
    state.total_deposits = state.total_deposits
        .checked_add(amount)
        .ok_or(ProgramError::ArithmeticOverflow)?;
    
    state.deposit_count = state.deposit_count
        .checked_add(1)
        .ok_or(ProgramError::ArithmeticOverflow)?;
    
    Ok(())
}

Or use a wrapper type that enforces checked arithmetic:

use uint::construct_uint;
construct_uint! {
    pub struct U256(4);
}

// Intermediate calculations in U256, convert back with overflow check
let result_u256 = U256::from(a) * U256::from(b) / U256::from(c);
let result: u64 = result_u256.try_into()
    .map_err(|_| ProgramError::ArithmeticOverflow)?;

PDA Validation: Trust But Verify

PDAs (Program Derived Addresses) are fundamental to Solana development, but native Rust requires manual validation that's easy to get wrong.

The Pattern

A function expects a specific PDA:

pub fn withdraw_from_vault(ctx: Context, vault_id: u64) -> ProgramResult {
    let vault = &ctx.accounts.vault;
    let authority = &ctx.accounts.authority;
    
    // Derive expected PDA
    let (expected_pda, bump) = Pubkey::find_program_address(
        &[b"vault", &vault_id.to_le_bytes()],
        program_id,
    );
    
    // Validate
    if vault.key() != expected_pda {
        return Err(ProgramError::InvalidAccountData);
    }
    
    // Process withdrawal...
}

The Vulnerability

Common PDA validation mistakes:

1. Forgetting to validate entirely:

// Trusts that the passed account is the right PDA
pub fn withdraw(ctx: Context) -> ProgramResult {
    let vault = &ctx.accounts.vault;  // Could be any account!
    // ... process
}

2. Using wrong seeds:

// Derived with vault_id but should include user pubkey
let (expected_pda, _) = Pubkey::find_program_address(
    &[b"vault", &vault_id.to_le_bytes()],
    program_id,
);
// Attacker can access any user's vault with the same vault_id

3. Inconsistent seed ordering:

// Creation uses [b"vault", user, id]
// Validation uses [b"vault", id, user]
// Different PDAs!

4. Not validating bump:

// Attacker might pass a different bump seed, creating a different PDA
// that happens to match some other account
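Seed ordering matters because the derivation consumes the seeds as an ordered sequence: same components, different order, different address. The sketch below only concatenates seed slices to show that the inputs diverge; the real derivation is Pubkey::find_program_address, which also mixes in the program ID and a bump:

```rust
// Flatten an ordered seed list the way the derivation consumes it.
fn seed_bytes(seeds: &[&[u8]]) -> Vec<u8> {
    seeds.concat()
}
```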

The Fix

Create a centralized PDA derivation module and use it consistently:

pub mod pda {
    pub fn get_vault_address(user: &Pubkey, vault_id: u64, program_id: &Pubkey) -> (Pubkey, u8) {
        Pubkey::find_program_address(
            &[
                b"vault",
                user.as_ref(),
                &vault_id.to_le_bytes(),
            ],
            program_id,
        )
    }
    
    pub fn validate_vault(
        account: &AccountInfo,
        user: &Pubkey,
        vault_id: u64,
        program_id: &Pubkey,
    ) -> ProgramResult {
        let (expected, _) = get_vault_address(user, vault_id, program_id);
        if account.key != &expected {
            msg!("Invalid vault PDA");
            return Err(ProgramError::InvalidAccountData);
        }
        if account.owner != program_id {
            msg!("Vault not owned by program");
            return Err(ProgramError::IllegalOwner);
        }
        Ok(())
    }
}

Conclusion

Native Rust development on Solana gives you maximum control, but that control comes with responsibility. The vulnerabilities we've explored share common themes:

  1. State synchronization: When you track state in multiple places, they can drift apart
  2. Boundary conditions: First operations, last operations, and empty states need explicit handling
  3. Exit paths: Every way into a system needs a corresponding way out
  4. Resource lifecycle: Create, use, and destroy (don't forget the destroy)
  5. Input validation: Trust nothing from external callers
  6. Arithmetic safety: Consistent checked arithmetic everywhere, not just "critical" paths
  7. PDA discipline: Centralized derivation, comprehensive validation

The impact of these bugs ranges from transaction spam to complete, permanent loss of user funds. Well-commented code that follows best practices can still harbor these vulnerabilities. They emerge from the gap between what developers assume and what the code actually enforces.

Native Rust isn't inherently less secure than Anchor, but it does shift more responsibility onto your shoulders. Whether you're writing native Rust or using a framework, the fundamental lesson remains: security requires systematic thinking about every state transition, every edge case, and every assumption your code makes.

The best code isn't just code that works. It's code that fails safely when assumptions are violated.

(For a comparison of how Solana's security model differs from Move-based chains, see our Sui vs Solana deep dive.)

Don't let these vulnerabilities drain your protocol. We've audited native Rust Solana programs across DeFi, gaming, and NFT platforms, identifying critical bugs before attackers could exploit them. Book your audit today and ship with confidence.
