TCP Listener¶
TcpListenerBase is the main TCP server foundation in Nalix.Network. It owns the listen socket, accept workers, connection limiter, object-pool setup, process-channel backpressure, timing-wheel integration, and the handoff from newly accepted sockets into Protocol.OnAccept(...).
Source mapping¶
- `src/Nalix.Network/Listeners/TcpListener/TcpListener.Core.cs`
- `src/Nalix.Network/Listeners/TcpListener/TcpListener.PublicMethods.cs`
- `src/Nalix.Network/Listeners/TcpListener/TcpListener.Handle.cs`
- `src/Nalix.Network/Listeners/TcpListener/TcpListener.ProcessChannel.cs`
- `src/Nalix.Network/Listeners/TcpListener/TcpListener.SocketConfig.cs`
- `src/Nalix.Network/Listeners/TcpListener/TcpListener.Metrics.cs`
Main responsibilities¶
- validate and load `NetworkSocketOptions`
- tune thread-pool minima on Windows when `TuneThreadPool` is enabled
- create and configure the listen socket
- accept incoming sockets in parallel
- reject abusive endpoints via `ConnectionLimiter`
- initialize `Connection` objects and wire their events
- queue accepted connections into a bounded process channel
- invoke `Protocol.OnAccept(...)` on the dedicated process thread
- manage `TimingWheel` activation when idle timeout tracking is enabled
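The handoff target of that last pipeline stage can be pictured as a minimal protocol. This is only a sketch: `IConnectionSketch` and `ProtocolSketch` are illustrative stand-ins, and the real `IConnection`/`Protocol` surface in Nalix.Network is larger.

```csharp
using System;

// Hypothetical stand-ins for the OnAccept handoff contract; the real
// Nalix.Network types expose far more members than shown here.
interface IConnectionSketch
{
    string RemoteEndpoint { get; }
}

abstract class ProtocolSketch
{
    public int Accepted { get; private set; }

    // Invoked once per accepted connection, on the dedicated process thread.
    public virtual void OnAccept(IConnectionSketch connection) => Accepted++;
}

class LoggingProtocol : ProtocolSketch
{
    public override void OnAccept(IConnectionSketch connection)
    {
        base.OnAccept(connection);
        Console.WriteLine($"accepted {connection.RemoteEndpoint}");
    }
}
```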
Startup flow¶
`Activate(ct)` currently:

- validates `MaxParallel >= 1`
- transitions `STOPPED -> STARTING -> RUNNING`
- creates a linked cancellation source and registers `SCHEDULE_STOP()`
- initializes the listen socket when needed
- optionally activates `TimingWheel`
- schedules `MaxParallel` accept workers via `TaskManager`
- starts the bounded process channel thread
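The startup steps above can be sketched as a compact state machine. Everything here is illustrative, not the actual `TcpListenerBase` implementation: `ListenerState`, `_state`, and the placeholder worker loop are assumed names, and the real accept workers run a full accept loop rather than waiting on a handle.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Illustrative startup sequence; not the actual TcpListenerBase code.
enum ListenerState { Stopped, Starting, Running, Stopping }

class StartupSketch
{
    private int _state = (int)ListenerState.Stopped;
    public int MaxParallel { get; init; } = 4;
    public CancellationTokenSource? LinkedCts { get; private set; }

    public Task[] Activate(CancellationToken external)
    {
        if (MaxParallel < 1)
            throw new ArgumentOutOfRangeException(nameof(MaxParallel));

        // STOPPED -> STARTING; a concurrent second Activate call fails here.
        if (Interlocked.CompareExchange(ref _state,
                (int)ListenerState.Starting, (int)ListenerState.Stopped)
            != (int)ListenerState.Stopped)
            throw new InvalidOperationException("listener is not stopped");

        // Linked source: cancelling the caller's token also stops the listener.
        var cts = CancellationTokenSource.CreateLinkedTokenSource(external);
        LinkedCts = cts;

        // Schedule MaxParallel accept workers (placeholder bodies).
        Task[] workers = Enumerable.Range(0, MaxParallel)
            .Select(_ => Task.Run(() => { cts.Token.WaitHandle.WaitOne(); }))
            .ToArray();

        Volatile.Write(ref _state, (int)ListenerState.Running); // -> RUNNING
        return workers;
    }

    public ListenerState State => (ListenerState)Volatile.Read(ref _state);
}
```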
Accept path¶
Each accept worker runs `AcceptConnectionsAsync(...)`:

- accepts one socket via `CreateConnectionAsync(...)`
- enforces `ConnectionLimiter`
- creates a `Connection`
- wires `OnCloseEvent`, `OnProcessEvent`, and `OnPostProcessEvent`
- registers the connection in `TimingWheel` if timeout support is enabled
- dispatches the connection to the process channel
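The limiter check on this path can be sketched with a per-endpoint counter. This is a hypothetical stand-in in the spirit of `ConnectionLimiter`; the real Nalix.Network API, its keying, and its thresholds may differ.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical per-endpoint admission counter, not the real ConnectionLimiter.
class EndpointLimiterSketch
{
    private readonly ConcurrentDictionary<string, int> _active = new();
    private readonly int _maxPerEndpoint;

    public EndpointLimiterSketch(int maxPerEndpoint) =>
        _maxPerEndpoint = maxPerEndpoint;

    // Called on the accept path: admit the socket or reject it outright.
    public bool TryAdmit(string endpoint)
    {
        while (true) // CAS loop so concurrent accept workers stay consistent
        {
            int current = _active.GetOrAdd(endpoint, 0);
            if (current >= _maxPerEndpoint) return false;
            if (_active.TryUpdate(endpoint, current + 1, current)) return true;
        }
    }

    // Wired to the connection's close event so the slot is released.
    public void Release(string endpoint) =>
        _active.AddOrUpdate(endpoint, 0, (_, c) => Math.Max(0, c - 1));
}
```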
Process channel backpressure¶
Accepted connections are not processed directly on accept workers. Instead:
- producers write `IConnection` into a bounded channel
- one dedicated background thread drains that channel at `BelowNormal` priority
- `DISPATCH_CONNECTION(...)` drops new writes when the channel is full
- dropped connections increment rejected metrics and are closed immediately
This keeps new-connection setup from starving packet-processing callbacks.
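The drop-on-full dispatch policy can be sketched with `System.Threading.Channels`. The capacity, the `string` payload standing in for `IConnection`, and the type names here are all illustrative, not the actual `TcpListener.ProcessChannel` code.

```csharp
using System.Threading;
using System.Threading.Channels;

// Sketch of a bounded channel with drop-on-full dispatch semantics.
class ProcessChannelSketch
{
    // Default FullMode.Wait means TryWrite simply fails when the channel
    // is full, which lets the producer reject instead of blocking.
    private readonly Channel<string> _channel = Channel.CreateBounded<string>(2);

    public int Rejected;

    // Accept workers call this; a full channel rejects the new connection.
    public bool Dispatch(string connection)
    {
        if (_channel.Writer.TryWrite(connection)) return true;
        Interlocked.Increment(ref Rejected); // real code also closes it
        return false;
    }

    // The dedicated process thread drains the channel one item at a time.
    public bool TryDrainOne(out string? connection) =>
        _channel.Reader.TryRead(out connection);
}
```

Rejecting at the writer keeps accept workers non-blocking: under overload the listener sheds new connections instead of queuing unboundedly.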
Shutdown flow¶
`Deactivate(ct)`:

- transitions into `STOPPING`
- cancels the linked CTS
- closes the listen socket
- stops the process channel
- cancels the listener worker group
- closes all active connections through `ConnectionHub`
- deactivates `TimingWheel` when enabled
- returns to `STOPPED`
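The ordering above can be sketched as an idempotent teardown: a compare-and-swap guard ensures only the first caller runs the sequence, and cancelling the linked CTS is what unblocks pending accepts. The state encoding and step labels are illustrative, not the actual member names.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative teardown ordering; not the actual TcpListenerBase code.
class ShutdownSketch
{
    private int _state; // 0 = RUNNING, 1 = STOPPING, 2 = STOPPED
    public readonly List<string> Steps = new();

    public void Deactivate(CancellationTokenSource linkedCts)
    {
        // RUNNING -> STOPPING; any later caller sees the CAS fail and returns.
        if (Interlocked.CompareExchange(ref _state, 1, 0) != 0) return;

        linkedCts.Cancel();                      // accept workers observe this
        Steps.Add("listen socket closed");       // no new connections arrive
        Steps.Add("process channel stopped");    // pending handoffs end
        Steps.Add("worker group canceled");
        Steps.Add("active connections closed");  // via the connection hub
        Volatile.Write(ref _state, 2);           // back to STOPPED
    }

    public bool IsStopped => Volatile.Read(ref _state) == 2;
}
```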
Diagnostics¶
`GenerateReport()` prints:

- port, state, and disposal flag
- socket configuration values
- accept/reject/error metrics
- bound protocol name
- active connection count from `ConnectionHub`
- current thread-pool minima
- whether time sync is enabled
Basic usage¶
```csharp
var protocol = new SampleProtocol();
var listener = new SampleTcpListener(protocol);

await listener.Activate(ct);

string report = listener.GenerateReport();
Console.WriteLine(report);
```