# Architecture

This page describes the current server-side shape of Nalix.Network as it exists in `src`, leaving out the older placeholder layers and since-renamed runtime pieces.

## Runtime map
At a high level, a Nalix server is built from:
- listeners that accept TCP or UDP traffic
- a protocol that decides how accepted traffic is processed
- a dispatch channel that deserializes packets and invokes handlers
- connection services that keep session state and transport helpers alive
- throttling and middleware that protect the runtime under load
```mermaid
flowchart LR
    Client["Client"] --> Listener["TcpListenerBase / UdpListenerBase"]
    Listener --> Protocol["Protocol"]
    Protocol --> Dispatch["PacketDispatchChannel"]
    Dispatch --> Middleware["Packet middleware"]
    Middleware --> Handler["Handler controller / method"]
    Handler --> Connection["Connection / ConnectionHub"]
```
## Main server flow

The common TCP path looks like this:

1. `TcpListenerBase` accepts a socket. `ConnectionLimiter` may reject or ban the endpoint before full admission.
2. The listener creates a `Connection` and passes it to `Protocol.OnAccept(...)`.
3. The protocol receives framed data and forwards packets into `PacketDispatchChannel`.
4. Dispatch deserializes the packet, resolves metadata, runs middleware, invokes the handler, and optionally sends a result back through the connection.

The UDP path follows the same broad model, but datagrams are authenticated and mapped to session state differently inside `UdpListenerBase`.
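The admission step at the front of this flow can be sketched in a few lines. The following is an illustrative Python model of the idea only, not the Nalix API: `ConnectionLimiterSketch`, `try_admit`, and `on_accept` are invented names, and the real `ConnectionLimiter` has its own policies and configuration.

```python
class ConnectionLimiterSketch:
    """Caps concurrent connections per remote endpoint (simplified model)."""

    def __init__(self, max_per_endpoint: int = 2):
        self.max_per_endpoint = max_per_endpoint
        self.active: dict[str, int] = {}

    def try_admit(self, endpoint: str) -> bool:
        count = self.active.get(endpoint, 0)
        if count >= self.max_per_endpoint:
            return False  # rejected before full admission
        self.active[endpoint] = count + 1
        return True

    def release(self, endpoint: str) -> None:
        # Called when a connection closes, freeing an admission slot.
        self.active[endpoint] -= 1


def on_accept(limiter: ConnectionLimiterSketch, endpoint: str) -> str:
    """Mirrors the listener step: check admission, then hand off to the protocol."""
    if not limiter.try_admit(endpoint):
        return "rejected"
    return "handed-to-protocol"
```

The point of doing this check in the listener, before a `Connection` is built, is that a rejected endpoint never consumes protocol or dispatch resources.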
## Core building blocks

### Listeners

- `TcpListenerBase` owns accept loops, backpressure, timing-wheel integration, and connection lifecycle.
- `UdpListenerBase` owns datagram receive loops, auth validation, and UDP session handling.

### Protocol

`Protocol` is the bridge between transport and dispatch. It is where accepted connections are initialized and where incoming message buffers are handed over to application dispatch.
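To make "framed data" concrete, here is a minimal Python sketch of splitting a TCP byte stream into frames, assuming a simple big-endian u16 length prefix. The framing format is an assumption for illustration; Nalix's actual wire format may differ.

```python
import struct


def frames_from_buffer(buffer: bytes) -> tuple[list[bytes], bytes]:
    """Split a byte stream into length-prefixed frames (u16 big-endian
    header), returning complete frames plus any leftover bytes."""
    frames, offset = [], 0
    while len(buffer) - offset >= 2:
        (length,) = struct.unpack_from(">H", buffer, offset)
        if len(buffer) - offset - 2 < length:
            break  # incomplete frame: keep the remainder until more data arrives
        frames.append(buffer[offset + 2 : offset + 2 + length])
        offset += 2 + length
    return frames, buffer[offset:]
```

A protocol layer in this model would call `frames_from_buffer` on each receive, forward every complete frame to dispatch, and carry the leftover bytes into the next read.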
### Dispatch

`PacketDispatchChannel` is the application entry point for packets. It is responsible for:
- queueing inbound work
- deserializing packets with the packet registry
- resolving handler descriptors and metadata
- running middleware
- processing handler return values
Under the hood it uses `PacketDispatchOptions<TPacket>` and shard-aware worker loops.
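The "run middleware, then invoke the handler" step can be modeled as function composition. This is a hedged Python sketch of the pattern, not Nalix code; `build_pipeline`, `auth_mw`, and the other names are invented for illustration.

```python
from typing import Callable

Handler = Callable[[dict], str]
Middleware = Callable[[dict, Handler], str]


def build_pipeline(middleware: list[Middleware], handler: Handler) -> Handler:
    """Fold middleware around the handler, outermost first, so each
    component can run code before/after the rest of the chain."""
    chain = handler
    for mw in reversed(middleware):
        chain = (lambda mw, nxt: lambda packet: mw(packet, nxt))(mw, chain)
    return chain


def logging_mw(packet: dict, nxt: Handler) -> str:
    packet.setdefault("trace", []).append("log")
    return nxt(packet)


def auth_mw(packet: dict, nxt: Handler) -> str:
    if not packet.get("authenticated"):
        return "denied"  # short-circuit without invoking the handler
    return nxt(packet)


def echo_handler(packet: dict) -> str:
    return f"echo:{packet['body']}"


pipeline = build_pipeline([logging_mw, auth_mw], echo_handler)
```

Composing middleware this way is what lets a component like the auth check reject a packet without the handler ever running.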
### Connection state

`Connection` and `ConnectionHub` provide:
- live connection lookup
- session/user correlation
- send helpers for TCP and UDP
- diagnostics and cleanup hooks
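A hub with those responsibilities can be sketched as a pair of lookup tables. This is a simplified Python model of the concept, with invented names; the real `ConnectionHub` manages live transports, not plain values.

```python
class ConnectionHubSketch:
    """Keeps live connections addressable by id and correlated to a session."""

    def __init__(self):
        self._by_id: dict[int, object] = {}
        self._session_of: dict[int, str] = {}

    def register(self, conn_id: int, conn: object, session: str) -> None:
        self._by_id[conn_id] = conn
        self._session_of[conn_id] = session

    def lookup(self, conn_id: int):
        return self._by_id.get(conn_id)

    def session(self, conn_id: int):
        return self._session_of.get(conn_id)

    def remove(self, conn_id: int) -> None:
        """Cleanup hook: drop both the connection and its session correlation."""
        self._by_id.pop(conn_id, None)
        self._session_of.pop(conn_id, None)
```

Keeping the session correlation next to the connection map is what makes "send to this user" and "clean up this session" cheap at runtime.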
## Protection and pressure control

The network runtime is designed to run with pressure controls enabled, not as an afterthought.

Typical pieces are:

- `ConnectionLimiter` for admission control
- `TokenBucketLimiter` and `PolicyRateLimiter` for request-rate protection
- `ConcurrencyGate` for in-flight handler limits
- `TimingWheel` for timeout and scheduled expiry handling
- `TimeSynchronizer` for periodic time-sync events used by the listener layer
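As one example of these mechanisms, the token-bucket idea behind request-rate protection is small enough to sketch. This is the textbook algorithm in Python, not the `TokenBucketLimiter` implementation; the class and parameter names here are invented.

```python
class TokenBucketSketch:
    """Classic token bucket: up to `capacity` tokens, refilled at `rate`
    tokens per second; a request is allowed only if a token is available."""

    def __init__(self, capacity: float, rate: float, now: float = 0.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, clamped to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The `capacity` bounds burst size while `rate` bounds sustained throughput, which is why token buckets are a common fit for per-connection request limits.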
## Dispatch pipeline

Conceptually, packet dispatch looks like this:

```mermaid
sequenceDiagram
    participant L as Listener
    participant P as Protocol
    participant D as PacketDispatchChannel
    participant M as Middleware
    participant H as Handler
    participant C as Connection
    L->>P: accepted data / datagram
    P->>D: handle packet
    D->>D: deserialize + resolve metadata
    D->>M: run inbound middleware
    M->>H: invoke handler
    H-->>D: return result or complete
    D-->>C: send response if applicable
The exact middleware stack depends on your configuration and metadata. Packet attributes such as permission, timeout, concurrency, and rate limits become runtime behavior through the metadata pipeline.
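The "attributes become runtime behavior" idea can be sketched with a decorator that attaches declarative metadata which the dispatcher then enforces. This is a conceptual Python analogy, not the Nalix attribute system; `packet_meta`, `dispatch`, and the metadata keys are invented for illustration.

```python
def packet_meta(*, timeout_s: float = 5.0, requires_auth: bool = False):
    """Attach declarative metadata to a handler, mimicking how packet
    attributes are read by a metadata pipeline at dispatch time."""
    def decorate(fn):
        fn.meta = {"timeout_s": timeout_s, "requires_auth": requires_auth}
        return fn
    return decorate


def dispatch(handler, packet: dict) -> str:
    meta = getattr(handler, "meta", {})
    if meta.get("requires_auth") and not packet.get("authenticated"):
        return "permission-denied"  # behavior derived from metadata, not handler code
    return handler(packet)


@packet_meta(requires_auth=True)
def whoami(packet: dict) -> str:
    return f"user:{packet['user']}"
```

The handler itself stays free of permission logic; the declared metadata is what the pipeline turns into a runtime check.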
## Where other packages fit

- `Nalix.Common` provides contracts and packet attributes.
- `Nalix.Framework` provides `ConfigurationManager`, `InstanceManager`, `TaskManager`, `Snowflake`, and timing helpers.
- `Nalix.Shared` provides built-in packet types and packet registry support.
- `Nalix.SDK` is the client-side counterpart.