
Tentacle

Tentacle is a distributed hashcat worker that integrates with the Kraken Orchestra system. It acts as an “Instrument” in Orchestra terminology - a worker node that registers with a Conductor (scheduler), pulls work when ready, executes hashcat attacks, and reports results back.

Overview

Tentacle workers are designed to run on GPU-enabled Linux systems and process password cracking jobs distributed by the Kraken server. Each worker:
  • Registers with the Orchestra Conductor (running on the Krkn server)
  • Maintains a persistent connection via gRPC
  • Pulls work items when capacity is available
  • Downloads required files (hashlists, wordlists, rules, masks) on-demand
  • Executes hashcat attacks using local GPU resources
  • Streams results back to the conductor
  • Handles failures gracefully with automatic retry logic

Architecture

(Architecture diagram: Tentacle3.drawio)

Key Components

  1. Worker: Main entry point that creates an Orchestra Instrument and processes jobs
  2. Executor: Handles hashcat execution, file management, and result streaming
  3. File Manager: Downloads and caches wordlists, rules, masks, and hashlists
  4. Orchestra Integration: Manages registration, heartbeats, work pulling, and lease management

Attack Modes Supported

Tentacle supports the standard hashcat attack modes:
  • Dictionary Attack (mode 0): Wordlist-based attack
  • Rule-based Attack (mode 0 + rules): Wordlist with transformation rules
  • Brute Force (mode 3): Mask-based exhaustive attack
  • Hybrid Wordlist + Mask (mode 6): Append mask patterns to wordlist entries
  • Hybrid Mask + Wordlist (mode 7): Prepend mask patterns to wordlist entries

Prerequisites

Hardware Requirements

  • GPU: NVIDIA GPU with CUDA support (tested with CUDA 13.0)
  • RAM: Minimum 8GB (16GB+ recommended for large wordlists)
  • Storage: 50GB+ for hashcat, wordlists, and temporary files
  • Network: Stable connection to Kraken server

Software Requirements

  • OS: Linux (Ubuntu 24.04 recommended)
  • NVIDIA Driver: Version 575+ (for CUDA 13.0)
  • Docker: Latest version with NVIDIA Container Toolkit
  • GPU Support: NVIDIA Container Runtime configured

Installation

Run the host preparation script to install NVIDIA drivers, Docker, and the NVIDIA Container Toolkit:

./prep_host.sh
This script will:
  • Update the system packages
  • Install NVIDIA driver 575 (if not present)
  • Install Docker and enable the service
  • Install NVIDIA Container Toolkit
  • Configure Docker to use the NVIDIA runtime
  • Prompt for a system restart
Important: Reboot the system after running prep_host.sh.
After the reboot, verify that GPU passthrough works:
docker run --rm --runtime=nvidia --gpus all nvidia/cuda:13.0.0-base-ubuntu24.04 nvidia-smi
You should see the GPU(s) listed in the output.

Configure Environment

Create a .env file by copying the .env.environment template:
cp .env.environment .env
Edit .env with your configuration:
# Orchestra Conductor address (Kraken server)  
CONDUCTOR_ADDR=krkn-server-hostname-or-ip:65535  
  
# Unique worker ID (defaults to hostname if not set)  
WORKER_ID=worker-name 
  
# Number of concurrent jobs this worker can handle  
CAPACITY=1  
  
# Directory for temporary files  
TEMP_DIR=/tmp/tentacle  
  
# Path to hashcat installation (inside container)  
HASHCAT_PATH=/opt/hashcat  
  
# Path to hashcat shared resources  
SHARED_PATH=/opt/hashcat  
  
# Optional: GPU model label for conductor scheduling  
GPU_MODEL=RTX-4090  
  
# Optional: Region label for conductor scheduling  
REGION=us-east  

Build the Docker Image

Set your GitHub token (required):
export KRKN_KEY=your_github_personal_access_token 
Build the Tentacle Docker Image:
cd temp/Tentacle/docker && chmod +x ./build.sh && ./build.sh

How Tentacle Works

Registration and Connection

When a Tentacle worker starts:
  1. Connect to Conductor: Establishes gRPC connection to Kraken server (port 65535)
  2. Register: Sends worker ID, capacity, and labels to Orchestra Conductor
  3. Receive Config: Gets heartbeat interval (default 3 seconds)
  4. Start Loops: Begins heartbeat and work-pulling loops

Work Acquisition

The worker continuously polls for work:
  1. Pull Work: Calls PullWork RPC when capacity available
  2. Acquire Lease: Receives work item with 30-minute lease
  3. Download Files: Streams hashlist, wordlist, rules, masks from conductor
  4. Execute Hashcat: Runs attack with downloaded files
  5. Report Results: Streams cracked hashes back to conductor
  6. Release Lease: Marks work complete and frees capacity

File Management

Files are downloaded on-demand and cached in /tmp/tentacle:
  • Hashlists, wordlists, rules, and masks are streamed in chunks
  • Files are reused across jobs when possible
  • Temporary files cleaned up after job completion

Heartbeat and Lease Management

  • Heartbeat: Sent every 3 seconds to maintain connection
  • Lease Duration: 30 minutes per work item
  • Auto-Extension: Lease extended if job still running
  • Failure Recovery: If worker crashes, lease expires and work is reclaimed

Monitoring

View Logs

# Live logs  
docker logs -f tentacle-worker  
  
# Last 100 lines  
docker logs --tail 100 tentacle-worker  

Check Status

# Container status  
docker ps | grep tentacle-worker  
  
# GPU utilization  
nvidia-smi  
  
# Resource usage  
docker stats tentacle-worker  

Development Mode

For debugging:
cd temp/Tentacle/docker
./dev.sh
This opens an interactive shell in the container.

Configuration Reference

Variable        Description         Default         Required
CONDUCTOR_ADDR  Conductor address   (none)          Yes
WORKER_ID       Unique worker ID    hostname        No
CAPACITY        Concurrent jobs     1               No
TEMP_DIR        Temp directory      /tmp/tentacle   No
HASHCAT_PATH    Hashcat path        /opt/hashcat    No
GPU_MODEL       GPU label           (none)          No
REGION          Region label        (none)          No

Troubleshooting

Worker cannot connect to the Conductor
This may be due to firewall rules or the wrong port (65535). Try the following:
  1. Verify CONDUCTOR_ADDR in .env
  2. Check network connectivity: ping your-server-hostname
  3. Ensure the Kraken server is running
  4. Check that the firewall allows port 65535

GPU not detected
The CUDA drivers may be out of date, or an update may have broken the installation.
  1. Run nvidia-smi on the host
  2. Test Docker GPU access: `docker run --rm --gpus all nvidia/cuda:13.0.0-base-ubuntu24.04`
  3. Restart Docker: `sudo systemctl restart docker`
  4. Rerun the prep_host.sh script

Out-of-memory errors
The wordlist combined with rule permutations may be too large:
  1. Use a smaller wordlist
  2. Use a smaller rule file
  3. Increase system RAM

Sample Work Execution

(Screenshots Krkn 12 and Krkn 11: sample work execution in the Kraken UI.)