Overview
Kodex was an ENS (.eth) domain marketplace and analytics platform. It combined a marketplace, a registry browser, portfolio management, batch registration, automated domain sniping, AI-powered discovery, and real-time sales tracking into a single product.
The core idea was to make ENS domains discoverable. The ENS registry contains millions of names but no built-in way to search, filter, or compare them. Kodex indexed the entire registry, enriched it with marketplace data from every major NFT platform, layered on ML-based recommendations and GPT-driven categorization, and presented it all through a dense, trader-oriented interface.
The platform aggregated listings and offers from OpenSea, Blur, LooksRare, x2y2, Rarible, Element, Coinbase, Manifold, Infinity, and Flow through self-hosted Reservoir Protocol infrastructure, so users could trade .eth domains regardless of where they were listed. The project spanned fifteen repositories across five languages.
Design
Marketplace and registry
The frontend was a Next.js 13 app with a three-panel layout. The left panel held filters, category browsing, activity feeds, and a live chat. The center showed domains in either grid or table view with infinite scroll loading. The right panel displayed domain details, cart management, market analytics charts, and checkout flows.
The platform had two main browsing modes:
- Marketplace - listed (for-sale) domains from all aggregated marketplaces. Filters included price range (ETH or USD), domain length, status (buy now, has offers), sort order (price, alphabetical, last sale, highest offer, recently listed, expiring soonest), and date ranges.
- Registry - the full ENS registry, including unregistered domains. This was the discovery side of the product, letting users find available names to register rather than just browsing what was already for sale.
Both modes supported the AI-powered category filter system. Nearly every domain in the index was classified into categories (Cryptocurrency, Movies, Sports, Dictionary, Clubs, etc.) with subcategories, so users could drill into specific niches.
Domain pages
Each domain had a detail page with:
- Header showing the name, owner’s profile, like count, page view count (a popularity signal), price or highest offer or last sale, and listing/registration expiry
- Action buttons that changed based on context: buy now, make offer, sell, edit listing, cancel listing, register, or add to cart
- An offers table showing all active offers with accept/cancel actions
- Activity history tracing every transaction the domain was involved in
- Price history chart with configurable time ranges
- Categories the domain falls under
- Similar domains from the ML recommendation model (8 shown by default, up to 100 available)
User features
Authentication worked through RainbowKit wallet connection with SIWE (Sign-In with Ethereum). Once signed in, users had access to:
- Cart - a server-side shopping cart that separated domains into purchase (listed domains) and register (unregistered domains) sections. Users could batch-purchase listed domains or batch-register available ones.
- Portfolio - view owned domains, list them for sale with custom price and duration, extend registrations, transfer tokens, review received offers and sent offers
- Watchlist - liked domains persisted across sessions
- Followers - follow other users and see their activity
- Notifications - configurable alerts for bid activity, followed user activity, and other events, delivered via email
- Roll - a discovery feature that surfaced a single random domain matching optional filters, for serendipitous browsing
Market analytics
The right panel included a market analytics section with aggregate statistics (total sales volume, offer volume, registration counts), historical charts with duration selectors, and a category-filtered event feed showing sales, listings, and registrations in real time. Charts were rendered with D3.
State management used Redux Toolkit with redux-persist for session continuity across page loads. PostHog handled product analytics. Twemoji rendered emoji domains consistently across browsers.
API and indexing
The backend was a Rust service built on actix-web, deployed as four separate binaries: the REST API server, a blockchain event indexer, a periodic validation checker, and a data loader for bulk imports.
Blockchain indexer
The indexer subscribed to Ethereum via ethers-rs WebSocket connections, listening for ENS registrar contract events: NameRegistered, NameRenewed, NameMigrated, and Transfer. For each event, the log data was decoded, the full transaction fetched for additional context (like the registrant address and payment amount), and parameterized SQL queries batched and executed against PostgreSQL.
The indexer tracked its position at the block level so it could resume from the last processed block after restarts. Provider connection failures were handled with exponential backoff retry logic to deal with the reality of long-running WebSocket connections to Ethereum nodes.
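The checkpoint-and-retry behavior described above can be sketched in a few lines. This is an illustrative Python sketch, not the actual Rust implementation; the names `IndexerCheckpoint` and `with_backoff` are hypothetical, and the real service persisted its block position in PostgreSQL rather than in memory.

```python
import time

class IndexerCheckpoint:
    """Tracks the last fully processed block so the indexer can resume
    from that point after a restart (illustrative in-memory version)."""
    def __init__(self, start_block: int):
        self.last_block = start_block

    def advance(self, block: int) -> None:
        # Never move backwards, even if events arrive out of order.
        self.last_block = max(self.last_block, block)

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry fn with exponential backoff, as the indexer did for
    dropped WebSocket connections to Ethereum nodes."""
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            # Sleep base_delay * 2^attempt, capped; jitter omitted for clarity.
            time.sleep(min(base_delay * 2 ** attempt, 30.0))
```

On restart, the indexer would read the checkpoint and re-subscribe from `last_block + 1`, making event processing idempotent across crashes.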
REST API
The API served the full range of marketplace functionality:
- Domain search - plain text search ran against a ranked domains materialized view in PostgreSQL. Similar-domain search called out to the FastText ML model, then enriched the vector-similar results with database metadata (prices, listing status, categories, expiry dates) before returning them.
- Feed events - a paginated, filterable feed of marketplace events (sales, listings, registrations, transfers, offers) with sorting options and domain name filtering
- Floor prices - current floor price data for the ENS collection, filterable by category
- Total stats - aggregate marketplace statistics pulled from a pre-computed events_stats materialized view
- Domain info - real-time registration cost lookups that combined the Chainlink ETH/USD oracle price with the ENS registrar controller’s rent price calculation to show USD costs
- Domain categories - category data served from the GPT classification pipeline
- AI categories (live) - a proxy endpoint to the GPT service for on-demand batch classification of new domains
- Roll - random domain selection matching optional status filters
- User sessions - SIWE-based authentication with actix-session backed by Redis, implementing the full EIP-4361 flow (nonce generation, message signing, verification)
- Cart - server-side shopping cart operations (add, remove, list, clear) tied to authenticated sessions
- Likes and followers - per-user domain likes and user-to-user follow relationships persisted in PostgreSQL with materialized views for counts
- Notifications - user notification preferences (bid activity, followed user activity) with email delivery
- Seaport orders - domain listing and offer creation/management through the Seaport protocol, integrated with Reservoir’s order routing
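The domain-info cost lookup above is a small unit conversion worth making concrete: the ENS rent price comes back in wei (10^18 per ETH) and the Chainlink ETH/USD feed reports its answer with 8 implied decimals. A minimal sketch, with a hypothetical function name:

```python
def registration_cost_usd(rent_price_wei: int, eth_usd_price_8dec: int) -> float:
    """Combine the ENS rent price (in wei) with the Chainlink ETH/USD
    answer (8 implied decimals) into a USD registration cost."""
    eth = rent_price_wei / 10**18          # wei -> ETH
    usd_per_eth = eth_usd_price_8dec / 10**8  # scaled answer -> dollars
    return eth * usd_per_eth
```

For example, a one-year rent of 0.005 ETH at an oracle price of $2,000/ETH comes out to $10.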
API documentation was auto-generated via paperclip and served as an interactive Swagger UI. Prometheus metrics were exposed for monitoring request latency, error rates, and indexer lag.
Data model
The PostgreSQL schema evolved through over twenty migrations across the project’s lifetime. It tracked ENS domain registrations and renewals with full event metadata, Seaport order state (listings and offers with token amounts, expiry timestamps, fulfillment status), user accounts with likes, followers/following relationships, notification preferences, shopping cart state, and several materialized views: ranked domains for search performance, aggregate sales statistics, and floor price calculations.
Architectural evolution
A second-generation backend was later started, moving from actix-web to Axum and restructuring as a Cargo workspace with separate indexer, server, shared, and tracing crates. It adopted utoipa for OpenAPI generation (replacing paperclip) and tower middleware layers for request ID tracking and Prometheus metrics. This version added Solana SDK dependencies for planned multi-chain expansion beyond Ethereum, and skia-safe for server-side rendering of domain preview images and OG cards.
Bulk registration
ENS domain registration requires a two-step commit-reveal process to prevent front-running: first a hash commitment is submitted to the registrar, then after a mandatory waiting period (typically 60 seconds) the actual registration is revealed and completed. The bulk registrar contract handled both steps for arbitrary batch sizes in single transactions.
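The commit-reveal gate can be sketched as follows. This is an illustrative mock, not the real registrar: the actual contract computes the commitment as keccak-256 over ABI-encoded arguments, and sha256 stands in here only because keccak-256 is not in the Python stdlib.

```python
import hashlib

MIN_COMMITMENT_AGE = 60  # seconds the registrar enforces between commit and reveal

def make_commitment(name: str, owner: str, secret: bytes) -> bytes:
    # The real registrar uses keccak256 over ABI-encoded (label, owner, secret);
    # sha256 over concatenated bytes is a stand-in for illustration.
    return hashlib.sha256(name.encode() + bytes.fromhex(owner[2:]) + secret).digest()

class MockRegistrar:
    """Minimal mock of the commit-reveal gate (illustrative only)."""
    def __init__(self):
        self.commitments: dict[bytes, int] = {}  # commitment -> commit timestamp

    def commit(self, commitment: bytes, now: int) -> None:
        # Step 1: only the hash goes on-chain, so the name stays hidden.
        self.commitments[commitment] = now

    def register(self, name: str, owner: str, secret: bytes, now: int) -> bool:
        # Step 2: reveal succeeds only if a matching commitment has aged enough.
        t = self.commitments.get(make_commitment(name, owner, secret))
        return t is not None and now - t >= MIN_COMMITMENT_AGE
```

The front-running protection comes from the ordering: by the time the name appears in plaintext in the reveal transaction, any competing registrant would still have to wait out their own commitment window.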
Contract evolution
The contract went through three major iterations, each more aggressively gas-optimized than the last:
V1 and V110 used high-level Solidity with try/catch around each registration call. Loop variables were sized to the theoretical maximum that could fit in a block - uint16 for commits (where each commitment costs roughly 46k gas, fitting ~649 per block) and uint8 for registrations (where each costs roughly 295k gas, fitting ~101 per block). A KODEX_SECRET_PREFIX (0xb7c4033f) was introduced in V110 to identify Kodex-originated commitments on-chain. Fees were collected in basis points (100 = 1%) applied only to successful registrations within a batch, so users never paid fees on failed attempts.
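The success-only basis-point fee model is simple integer arithmetic; a sketch (hypothetical function name, integer division matching Solidity semantics):

```python
def batch_fee_wei(successful_costs_wei: list[int], fee_bps: int) -> int:
    """Fee in basis points (100 bps = 1%) applied only to the registrations
    that succeeded within a batch; failed attempts incur no fee."""
    return sum(cost * fee_bps // 10_000 for cost in successful_costs_wei)
```

A batch with two successful 1 ETH registrations and one failure at a 100 bps fee charges 0.01 ETH twice and nothing for the failure.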
V120 was a ground-up rewrite in inline assembly. The key optimizations:
- Function selectors were brute-forced to compile to 0x00000000, saving gas on the EVM's selector dispatch (the function names bulkCommitO3346980833 and bulkRegisterX1633808270 were chosen specifically for their selector values)
- calldatacopy and raw call opcodes replaced Solidity's ABI encoding/decoding overhead
- selfbalance() replaced storage reads for tracking ETH flow through the contract
- Fee calculation was done entirely in assembly with no Solidity-level math
- The ENS registrar controller address was hardcoded as an immediate value rather than loaded from storage
V120 also decoupled the contract from any specific registration function signature - instead of calling register() directly, it accepted pre-encoded calldata bytes, allowing it to route to either register() or registerWithConfig() on the legacy or current ENS registrar controllers.
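Selector mining works by appending suffixes to a function name until the first four bytes of the signature hash start with the desired prefix. The sketch below illustrates the loop; note that Solidity actually uses keccak-256, and sha256 stands in only because keccak is not in the Python stdlib, so the suffixes mined here will not match the real contract's names.

```python
import hashlib

def selector(signature: str) -> bytes:
    # Solidity computes keccak256(signature)[:4]; sha256 is a stdlib stand-in.
    return hashlib.sha256(signature.encode()).digest()[:4]

def mine_selector(base: str, args: str, prefix: bytes, limit: int = 1_000_000):
    """Append numeric suffixes to the function name until its 4-byte
    selector starts with the target prefix (V120 targeted 0x00000000)."""
    for n in range(limit):
        sig = f"{base}{n}({args})"
        if selector(sig).startswith(prefix):
            return sig
    return None
```

Each leading zero byte takes roughly 256x more attempts to find, so a fully zero selector is an expectation of about 4 billion hashes - cheap enough offline, and paid once at development time for gas savings on every call.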
Benchmarking
Gas optimization was a competitive concern since other platforms (notably ENS Vision) offered their own bulk registration contracts. A dedicated benchmarking harness ran gas measurements on Tenderly network forks, simulating the full commit-reveal cycle for batch sizes from 1 to 14 domains. The harness used Tenderly’s tenderly_addBalance and evm_increaseTime RPC extensions to fund test accounts and fast-forward through the commit-reveal waiting period.
A separate Next.js dashboard visualized the results as comparative line charts (Recharts), plotting gas cost against batch size for both the Kodex and ENS Vision contracts side by side. It could generate fresh reports against a local Anvil fork or display previously saved benchmark data.
From the benchmark data at 14 domains per batch: the Kodex V120 contract used roughly 336k gas versus ENS Vision’s 354k - about 5% less gas, with the savings scaling linearly as batch size increased. At single-domain registration the overhead was slightly higher due to the assembly dispatch setup, but the crossover happened quickly and the gap widened from there.
Domain sniping
A Rust service watched the Ethereum mempool to race ENS domain registrations. Users created snipe orders specifying a target domain name and a maximum price they were willing to pay. The system ran several separate processes:
- Mempool watcher - subscribed to pending transactions, filtered them for ENS registration method IDs, checked if the domain name in the transaction matched any active snipe orders, validated that the domain was still available on-chain, confirmed that a pre-computed commit hash was active and hadn’t expired, verified the current premium price was within the order’s budget, and if all checks passed, submitted a competing registration transaction
- Upkeep - continuously maintained commit hashes for all active snipe orders so they stayed valid. ENS commits expire after a time window, so the upkeep process re-committed as needed to ensure the sniper could always register immediately when a target appeared
- Validator - verified order state consistency, cleaned up completed or expired orders, and reconciled on-chain state with the database
Order state (pending, committed, sniped, failed) was tracked in PostgreSQL with a full audit trail. The system maintained pre-computed commit salts per order so registration transactions could be constructed and submitted with minimal latency - the difference between winning and losing a snipe often came down to a few hundred milliseconds.
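The mempool watcher's decision logic reduces to a chain of cheap filters. A simplified sketch over pre-decoded transactions - the method IDs, field names, and `should_snipe` helper are hypothetical, and the on-chain availability and commit-hash freshness checks are elided:

```python
# Hypothetical method IDs; the real watcher matched the ENS registrar
# controller's actual registration selectors.
REGISTRATION_METHOD_IDS = {"0xaaaaaaaa", "0xbbbbbbbb"}

def should_snipe(tx: dict, orders: dict[str, int], current_premium_wei: int) -> bool:
    """tx is a pre-decoded pending transaction: {'method_id': str, 'name': str}.
    orders maps domain name -> the max price in wei the user will pay."""
    # Cheapest check first: is this even a registration call?
    if tx["method_id"] not in REGISTRATION_METHOD_IDS:
        return False
    max_price = orders.get(tx["name"])
    # (On-chain availability and commit-hash validity checks omitted.)
    return max_price is not None and current_premium_wei <= max_price
```

Ordering the filters from cheapest to most expensive mattered here: method-ID matching is a string comparison, while on-chain validation costs an RPC round trip, and latency was the whole game.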
Discovery
Similar domains
A compressed FastText word vectorization model served via Flask powered the “similar domain” feature. Given a domain name and optional category terms, it computed a vector representation and returned the closest matches by cosine distance.
When categories were provided, the model applied a projection-based adjustment to the search vector. It computed the category vector, subtracted the component of the input vector that aligned with the category direction, and added the full category vector. This shifted the similarity space toward domains that matched both the semantic meaning of the input name and the requested category, without requiring separate models per category.
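In vector terms, the adjustment is q' = q - (q·ĉ)ĉ + c, where q is the query vector, c the category vector, and ĉ its unit direction. A dependency-free sketch (hypothetical helper names; the real service operated on FastText embeddings):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(v, k): return [x * k for x in v]
def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def category_adjust(query: list[float], category: list[float]) -> list[float]:
    """Remove the query's component along the category direction, then
    add the full category vector: q' = q - (q.c_hat) c_hat + c."""
    unit = scale(category, 1.0 / math.sqrt(dot(category, category)))
    projection = scale(unit, dot(query, unit))
    return add(sub(query, projection), category)
```

Subtracting the projection first prevents double-counting: whatever category signal the name already carried is replaced, not stacked, so the output sits at a controlled distance along the category axis while keeping the name's off-axis meaning.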
The model was pre-trained on the full set of indexed domain names and compressed for production, served with gunicorn on Kubernetes.
Categorization
A GPT-based pipeline classified domains into weighted categories. The training dataset mapped real-world entities to their ENS-style domain names - movie titles (“Batman: The Dark Knight” mapped to ["batman", "darkknight"]), sports teams, crypto projects, dictionary words, and more - each with manually assigned category-confidence scores (e.g. Cryptocurrency: 100, Technology: 90, Investment: 50).
The OpenAI API was prompted with these examples to classify arbitrary batches of domains, returning category-confidence pairs. The results were stored and served through the API’s category endpoints, powering the marketplace and registry category filters. Categories included things like Cryptocurrency, Movies, Sports, Dictionary, Brands, Animals, and many subcategories within each.
Order aggregation
The ENS domain market was fragmented across many platforms. A domain might be listed on OpenSea, have offers on LooksRare, and show different prices on Blur - with no single place to see everything. Kodex ran self-hosted Reservoir Protocol infrastructure to solve this.
An indexer node synced the on-chain NFT order book, tracking Seaport contract events and ENS token ownership state. A separate relayer service aggregated listings and offers from ten marketplaces: OpenSea (across Seaport v1.1 through v1.5), Blur, LooksRare, x2y2, Rarible, Element, Coinbase, Manifold, Infinity, and Flow.
Each marketplace had dedicated sync workers with BullMQ job queues handling both real-time streaming (watching for new orders as they appeared) and historical backfill (catching up on orders placed before the relayer started). Orders from all sources were normalized into a common format in PostgreSQL, so the frontend could show a unified order book regardless of which marketplace originated the listing. When a user purchased a domain through Kodex, the transaction was routed through Reservoir’s order router, which handled the specifics of fulfilling orders on whichever protocol held the listing.
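The normalization step can be pictured as mapping each marketplace's raw payload onto one common shape. An illustrative sketch - the field names and the `NormalizedOrder` schema here are hypothetical simplifications of what Reservoir actually stores:

```python
from dataclasses import dataclass

@dataclass
class NormalizedOrder:
    """Illustrative common order shape; the real schema has many more fields."""
    token_id: str
    price_wei: int
    kind: str    # "listing" | "offer"
    source: str  # originating marketplace
    expiry: int  # unix timestamp

def normalize_opensea(raw: dict) -> NormalizedOrder:
    # Hypothetical raw payload shape, for illustration only.
    return NormalizedOrder(
        token_id=raw["token"]["id"],
        price_wei=int(raw["current_price"]),
        kind="listing" if raw["side"] == "sell" else "offer",
        source="opensea",
        expiry=raw["expiration_time"],
    )
```

With one normalizer per marketplace feeding the same table, the frontend's order-book queries never need to know which protocol a given listing came from.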
Sales tracking
A TypeScript service monitored on-chain ENS domain sales in real time. It listened for ERC-721 Transfer events on the ENS base registrar contract, then decoded the surrounding transaction to determine which marketplace facilitated the sale. Separate parsers handled the event formats from Seaport (OpenSea), SudoSwap, NFTTrader, and aggregators like Gem and Genie.
For each detected sale, the service extracted the sale price (handling multiple currencies via a configurable currency map), buyer and seller addresses, and token ID. Sales were then formatted into rich embeds with domain name, price in ETH and USD, buyer/seller links, and marketplace branding (each marketplace had its own color, icon, and display name), then posted to Discord (via webhook), Telegram, and Twitter. Swap transactions (via NFTTrader) received special handling with generated GIF images showing both sides of the trade.
Kodex-facilitated Seaport orders were identified and handled separately from generic marketplace sales, using the Kodex market branding in the embed.
ENS management
A customized fork of the official ENS Manager App (ens-app-v3) served as a demo for the ENS team, showing how Kodex’s backend could be integrated directly into the official site. The fork wired the standard ENS management experience - setting records, transferring names, managing subdomains, configuring resolvers - to Kodex’s API endpoints and added support for Kodex-specific order types. The goal was to demonstrate a partnership model where Kodex would serve as the marketplace and data backend for the official ENS app itself.
Documentation
A Docusaurus site served both user guides and developer documentation. The user guides covered every feature of the marketplace, registry, domain pages, cart flows (purchasing, offering, registering), and filter system. The developer section documented the Seaport order integration (how to identify Kodex orders, listing and offer schemas), subgraph APIs (both the Kodex ENS subgraph and the Seaport subgraph with query examples), and the Reservoir integration. An OpenAPI spec was included for the REST API.
The site also had a hiring section with detailed role descriptions for data engineers, backend engineers, frontend engineers, and full-stack engineers.
Infrastructure
The platform ran on AWS EKS, managed through Terraform. The infrastructure configuration covered the EKS cluster with managed node groups, ECR repositories for container images, ACM certificates, ALB ingress, and IAM roles for both cluster-level and container-level permissions.
ArgoCD handled GitOps-based continuous deployment, syncing from the k8s-manifests directories present in each service repository. Cloudflare tunnels provided ingress, with both WARP tunnels (for internal access) and standard ingress tunnels (for public traffic) configured as reusable Terraform modules.
The cluster also hosted a ClickHouse instance for analytics data, EC2 snapshot policies for backup, and self-hosted GitHub Actions runners for CI. Services were containerized with separate Dockerfiles per role - the main API alone shipped four images (service, indexer, cron, checker), and the sniper bot had its own set. Infrastructure secrets were managed through git-crypt.
Tech stack
| Layer | Technologies |
|---|---|
| Frontend | Next.js 13, TypeScript, React, Redux Toolkit, Tailwind CSS, RainbowKit, wagmi, viem, D3, PostHog |
| API | Rust, actix-web, Axum, sqlx, PostgreSQL, ethers-rs, Redis, Prometheus |
| Smart contracts | Solidity, Foundry, inline assembly |
| Sniping | Rust, ethers-rs, PostgreSQL |
| ML / AI | Python, Flask, FastText, OpenAI API |
| Order aggregation | TypeScript, Node.js, PostgreSQL, Redis, BullMQ (Reservoir Protocol) |
| Sales tracking | TypeScript, Node.js, ethers.js, Discord/Telegram/Twitter APIs |
| ENS management | React, TypeScript (fork of ENS App V3) |
| Documentation | Docusaurus, OpenAPI |
| Infrastructure | AWS EKS, Terraform, ArgoCD, Cloudflare, Docker, Kubernetes, git-crypt |