PCPB-25616: Smart Internal Batching to Improve Signaling Latency

A study guide for understanding the data-plane and pfcp-endpoint repositories in the context of the smart batching feature.

Chapter 1: What Are These Services?

You're looking at two C microservices that together form the User Plane (UP) of a 3GPP Packet Core (PCG/PCC):

| Service | Role | Analogy |
| --- | --- | --- |
| data-plane | Forwards user traffic (payload) at wire speed using DPDK | The "fast engine" — touches every packet |
| pfcp-endpoint (PEP) | Handles PFCP signaling from the Control Plane (SMF/CP) | The "brain" — manages sessions, associations, paths |
ℹ️ Key Insight
The DP does the packet forwarding. The PEP tells the DP how to forward by provisioning sessions (PDRs, FARs, QERs, URRs) into it over an internal interface.

Repo sizes at a glance

| Repo | Language | Approx LOC (src) | Key binary |
| --- | --- | --- | --- |
| data-plane | C (some C++) | ~2M+ (huge, 90+ modules) | data-plane |
| pfcp-endpoint | C (some C++) | ~300K (src/) | pfcp-endpoint |

Chapter 2: Where They Fit in PCG/PCC

PCG (Packet Core Gateway)

┌──────────┐    PFCP     ┌───────────────┐   internal   ┌────┐
│   SMF    │◄───────────►│ pfcp-endpoint │◄────────────►│ DP │
│(Control) │             │     (PEP)     │  (sessions)  │    │
└──────────┘             └───────────────┘              │    │
                                                        │    │
┌──────────┐                                            │    │
│  NWCMA   │──────────────── config ──────────────────►│    │
│(CM Agent)│                                            └────┘
└──────────┘                                             ▲  ▲
                                                   N3/N9 │  │ N6
                                                   (GTP) │  │
                                                         ▼  ▼
                                                     [gNB]  [DN]

Key interfaces

Chapter 3: The PFCP Protocol

PFCP (Packet Forwarding Control Protocol, 3GPP TS 29.244) is the protocol between the Control Plane and User Plane. As a former system tester, you've likely seen PFCP messages in traces. Here's the developer perspective:

Message types PEP handles

| Direction | Messages |
| --- | --- |
| CP → UP (received) | Association Setup/Update/Release, Heartbeat, Session Establishment/Modification/Deletion |
| UP → CP (sent) | Association Update, Heartbeat, Session Report |

Session = collection of rules

💡 For your feature
"Signaling latency" = the time from when a PFCP message arrives at the UP until the response goes back to the CP. The batching feature aims to reduce this by being smarter about how internal work is grouped and scheduled.
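To make the wire format concrete, here is a minimal sketch of the PFCP message header laid out per TS 29.244. This is illustrative only: `pfcp_hdr_t` and `pfcp_hdr_decode` are hypothetical names, not the PEP's real decoder in pfcp.c.

```c
/* Minimal PFCP header decode following the TS 29.244 octet layout.
 * Hypothetical names; not the decoder in pfcp.c. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct pfcp_hdr {
    uint8_t  version;   /* top 3 bits of octet 1 (currently 1) */
    bool     has_seid;  /* S flag: set on session-level messages */
    uint8_t  msg_type;  /* e.g. 50 = Session Establishment Request */
    uint16_t msg_len;   /* payload length following octet 4 */
    uint64_t seid;      /* valid only when has_seid */
    uint32_t seq;       /* 24-bit sequence number */
} pfcp_hdr_t;

/* Returns header size in bytes, or -1 if the buffer is truncated. */
static int pfcp_hdr_decode(const uint8_t *buf, size_t len, pfcp_hdr_t *out)
{
    if (len < 8)
        return -1;
    out->version  = buf[0] >> 5;
    out->has_seid = (buf[0] & 0x01) != 0;   /* S flag */
    out->msg_type = buf[1];
    out->msg_len  = (uint16_t)((buf[2] << 8) | buf[3]);
    size_t off = 4;
    if (out->has_seid) {
        if (len < 16)
            return -1;
        out->seid = 0;
        for (int i = 0; i < 8; i++)
            out->seid = (out->seid << 8) | buf[off + i];
        off += 8;
    }
    out->seq = (uint32_t)((buf[off] << 16) | (buf[off + 1] << 8) | buf[off + 2]);
    return (int)(off + 4);                  /* 3 sequence bytes + 1 spare */
}
```

Session-level messages (S flag set) carry a 16-byte header; node-level messages (heartbeat, association) carry only 8 bytes.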

Chapter 4: The Feature — Smart Internal Batching (PCPB-25616)

The feature study PowerPoint is at:
/lab/epg_st_sandbox/etahris/PCPB/PCPB-25616/FS/

Problem statement

When the PEP receives a burst of PFCP session messages, it processes them and sends provisioning requests to the DP. Currently, each message may trigger individual internal operations (DB writes, mbox messages, etc.) that could be batched together to reduce overhead and latency.

Why it matters

Where to look in code

| Component | Repo | Relevance |
| --- | --- | --- |
| Work Manager | pfcp-endpoint | Queues and prioritizes incoming PFCP work |
| Session Engine | pfcp-endpoint | Processes session establishment/modification |
| ext_adapter | pfcp-endpoint | Receives messages from DP, feeds work manager |
| UPF session ctrl | data-plane | Receives provisioned sessions from PEP |
| Mailbox (mbox) | data-plane | Inter-CPU message passing — potential batching point |
| Session Queue | data-plane | Queues session operations per-session |

Chapter 5: Data-Plane Architecture

The data-plane runs on multiple vCPUs, each assigned specific roles:

                ┌────────────────────────────────────────────────┐
                │                 DATA-PLANE POD                 │
                │                                                │
Packets in ────►│ [Input]──►[Ingress]──►[Egress]──►[Output] ─────│────► Packets out
 (NIC/VF)       │    │          │          │          │          │      (NIC/VF)
                │    └──────────┴─────┬────┴──────────┘          │
                │                     │                          │
                │               [Controller]                     │
                │               (background)                     │
                │               - session provisioning           │
                │               - config handling                │
                │               - metrics                        │
                └────────────────────────────────────────────────┘

Role descriptions

| Role | Type | Responsibility |
| --- | --- | --- |
| Input | Foreground | Poll NIC, parse packets, calculate flow hash, prioritize |
| Ingress | Foreground | Intra-instance load balancer, flow lookup, traffic steering |
| Egress | Foreground | Main business logic: PFCP-based forwarding (PDR matching, FAR application) |
| Output | Foreground | Send packets to NIC |
| Controller | Background | Session provisioning, config, metrics, OAM |
ℹ️ Overload Protection Order
Processing priority is reverse of packet path: Output > Egress > Ingress > Input. This ensures already-started work completes before accepting new packets.

Key concepts

Source layout (data-plane/)

data-plane/
├── main/           # main.c, dp.c — the application entry point & orchestration
├── upf/            # UPF module: sessions, PDRs, FARs, QERs, URRs, DPI
├── pktio/          # Packet I/O: NIC abstraction, backends (DPDK, Linux, TAP)
├── mbox/           # Mailbox: inter-CPU message passing
├── core-loop/      # Core loop: the run-loop for each CPU role
├── protocol/       # Protocol handlers: GTP, IP/UDP, ARP, BFD, etc.
├── vrf/            # VRF (Virtual Routing & Forwarding)
├── cgnat/          # Carrier-Grade NAT
├── firewall/       # Firewall/ACL
├── itc/            # Internal Traffic Capture
├── up-common/      # Shared libraries (evl, logging, net, tls, etc.)
├── CMakeLists.txt  # Top-level build
├── Makefile        # Developer convenience targets
└── ARCHITECTURE.md # The architecture doc you should read first!

Chapter 6: PFCP-Endpoint Architecture

PEP is a single-threaded event-loop application (with a few helper threads). It's much simpler than the DP in terms of threading.

┌──────────────────────────────────────────────────────────────┐
│                     PFCP-ENDPOINT (PEP)                      │
│                                                              │
│  ┌─────────────┐     ┌──────────────┐     ┌──────────────┐   │
│  │ ext_adapter │────►│ work_manager │────►│    pep.c     │   │
│  │ (rx thread) │     │ (priority Q) │     │ (main logic) │   │
│  └─────────────┘     └──────────────┘     └──────┬───────┘   │
│        ▲                                         │           │
│        │ UDP                                     ▼           │
│      from DP                          ┌──────────────────┐   │
│      (punter)                         │  session_engine  │   │
│                                       │ association_eng  │   │
│                                       │ path_supervisor  │   │
│                                       └────────┬─────────┘   │
│                                                │             │
│                                                ▼             │
│                                       ┌──────────────────┐   │
│                                       │   relay to DP    │   │
│                                       │    (UDP out)     │   │
│                                       └──────────────────┘   │
└──────────────────────────────────────────────────────────────┘

Key source files (pfcp-endpoint/src/)

| File | Size | Purpose |
| --- | --- | --- |
| pep_main.c | 94K | Application entry, TLS setup, thread creation |
| pep.c | 110K | Core orchestration, module creation/wiring |
| pep_session_engine.c | 662K | Session establishment/modification/deletion logic |
| pep_association_engine.c | 392K | PFCP association handling |
| pfcp.c | 185K | PFCP message encoding/decoding |
| gtp_path_supervisor.c | 230K | GTP path management and heartbeats |
| pep_ctrl.c | 88K | Control logic, start/stop |
| ext_adapter.c | 57K | External adapter — receives from DP |
⚠️ File sizes
pep_session_engine.c is 662K — that's ~15,000+ lines. Don't try to read it top-to-bottom. Use the function index and search for specific flows.

Chapter 7: How DP and PEP Communicate

Message flow: CP → UP session establishment

SMF (CP)                     DP                              PEP
   │                         │                               │
   │──PFCP Session Est Req──►│                               │
   │                         │──dp-ctrl header+msg──────────►│  (punter, UDP)
   │                         │                               │── parse PFCP
   │                         │                               │── allocate SEID
   │                         │                               │── build session (PDRs, FARs)
   │                         │◄── provision to DP ───────────│  (session data, FB)
   │                         │── install session             │
   │                         │── provision response ────────►│
   │                         │                               │── build PFCP response
   │                         │◄── relay (UDP) ───────────────│
   │◄─PFCP Session Est Resp──│                               │

The dp-ctrl header

Messages between DP and PEP are wrapped in a private "dp-control" header containing:

PCG vs EPG steering

Chapter 8: Threading & Event Loops

PEP threading model

ℹ️ EVL = Event Loop
evl_t is the core event loop abstraction from up-common. It handles timers, deferred work, I/O events. Think of it like libuv or epoll wrapped in a nice C API. Almost everything in PEP runs on the main EVL thread.

DP threading model

💡 Why this matters for batching
The signaling path crosses thread boundaries: ext_adapter → work_manager → main EVL → session_engine → relay. Each boundary is a potential batching point. The mbox in DP is another: controller → egress for session installation.

📝 Quiz 1: Context & Architecture

1. What is the primary role of the pfcp-endpoint (PEP)?

2. Which DP role applies the main PFCP-based forwarding logic (PDR matching)?

3. What does "fast-path" mean in the data-plane?

4. How does PEP receive PFCP messages from the network?

5. What is the overload protection priority order in the DP?

Chapter 9: UPF Module (data-plane)

The upf/ directory is the heart of session handling in the data-plane. It's where PFCP sessions live after PEP provisions them.

Key files

| File | Purpose |
| --- | --- |
| upf_session_ctrl.c (1MB!) | Session controller — receives provisioning from PEP, manages session lifecycle |
| upf_session_engine.c | Engine side — runs on egress CPUs, applies session rules to packets |
| upf_session_queue.h | Per-session message queue (serializes operations on same session) |
| sx_session.c/.h | The session data structure (PDRs, FARs, QERs, URRs) |
| sx_session_transaction.c | Transaction handling for session modifications |
| upf_pfcp_punter.c | Receives PFCP from network, forwards to PEP |
| pep_adapter.c | Adapter between PEP's provisioning and DP's session ctrl |
| upf_engine.c (623K) | The main packet processing engine on egress |

Session message types (from upf_session_queue.h)

UPF_SESSION_MSG_TYPES:
  PFCP                    // Generic PFCP operation
  PFCP_ESTABLISHMENT      // New session
  REPORT                  // Usage report
  PAYLOAD                 // Packet triggered re-evaluation
  INVALIDATE              // Session invalidation
  TERMINATE               // Session deletion
  GEO_ESTABLISHMENT       // Geo-redundancy
  INTERNAL_MODIFICATION   // Internal config change
  ...
💡 For batching
The session queue serializes operations per-session. If multiple messages arrive for different sessions, they can potentially be batched at the controller level before being dispatched to individual session queues.
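The ordering constraint this implies can be illustrated in a few lines: operations on the same SEID must stay in order, while different sessions may be grouped freely. Mapping a SEID to a fixed queue by hash preserves per-session order; `session_queue_index` and `NUM_SESSION_QUEUES` below are hypothetical, not the repo's code.

```c
/* Per-session ordering sketch: same SEID always maps to the same queue,
 * so its operations stay serialized even when sessions are batched.
 * Hypothetical helper, not from upf_session_queue.h. */
#include <assert.h>
#include <stdint.h>

enum { NUM_SESSION_QUEUES = 4 };

static unsigned session_queue_index(uint64_t seid)
{
    /* cheap multiplicative mix; real code would hash more carefully */
    uint64_t h = seid * 0x9e3779b97f4a7c15ull;
    return (unsigned)(h >> 60) % NUM_SESSION_QUEUES;
}
```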

Chapter 10: Mailbox (mbox)

The mailbox is the primary mechanism for passing messages between CPUs in the data-plane.

How it works

// mbox/include/mbox/mbox.h
typedef enum mbox_priority {
    MBOX_PRIORITY_CRITICAL,  // Highest
    MBOX_PRIORITY_HIGH,
    MBOX_PRIORITY_MID,
    MBOX_PRIORITY_LOW,       // Lowest
} mbox_priority_t;

typedef struct mbox_msg {
    struct {
        uint32_t u32_1;
        uint32_t u32_2;
        void*    ptr;
        uint64_t u64;
    } data;
} mbox_msg_t;

The mbox uses lock-free MPSC queues (Multiple Producer, Single Consumer) — multiple CPUs can send to one CPU without locks.

Controller CPU                     Egress CPU 0
┌────────────┐                     ┌────────────┐
│  session   │── mbox_send() ────► │   mbox Q   │
│    ctrl    │                     │   (MPSC)   │
│            │── mbox_send() ────► │ processes  │
└────────────┘                     │  in poll   │
                                   └────────────┘

Egress CPU 1                       Controller CPU
┌────────────┐                     ┌────────────┐
│   engine   │── mbox_send() ────► │   mbox Q   │
│ (metrics)  │                     │   (MPSC)   │
└────────────┘                     └────────────┘

Relevance to batching

When the controller provisions a session, it sends mbox messages to egress CPUs. If many sessions are being provisioned simultaneously, batching these mbox messages could reduce overhead (fewer cache-line bounces, fewer wakeups).
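A toy model makes the win measurable: provision N sessions with one mbox message carrying a batch instead of N messages. Everything here (`batch_msg_t`, `mbox_send_sim`, `BATCH_MAX`) is illustrative, not the real mbox API.

```c
/* Toy model of the mbox batching win: each send stands in for one
 * consumer wakeup. Illustrative names only, not the real mbox API. */
#include <assert.h>
#include <stdint.h>

enum { BATCH_MAX = 32 };

typedef struct batch_msg {
    uint32_t count;
    uint64_t seid[BATCH_MAX];
} batch_msg_t;

static unsigned g_sends;  /* each send = one consumer wakeup */

static void mbox_send_sim(const batch_msg_t *m)
{
    (void)m;
    g_sends++;
}

/* Provision n sessions, flushing whenever the batch fills.
 * Returns the number of mbox sends (vs. n without batching). */
static unsigned provision_batched(const uint64_t *seids, unsigned n)
{
    g_sends = 0;
    batch_msg_t b = { .count = 0 };
    for (unsigned i = 0; i < n; i++) {
        b.seid[b.count++] = seids[i];
        if (b.count == BATCH_MAX) {
            mbox_send_sim(&b);
            b.count = 0;
        }
    }
    if (b.count)
        mbox_send_sim(&b);  /* flush the partial tail */
    return g_sends;
}
```

With a batch size of 32, provisioning 100 sessions costs 4 wakeups instead of 100.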

Chapter 11: PKTIO & Roles

PKTIO (Packet I/O) is the bottom layer abstracting NIC access. It has a frontend and multiple backends:

| Backend | Interface | Use |
| --- | --- | --- |
| LIBPIO | carrier + pool | Production: DPDK-based I/O |
| LINUX | carrier + pool | Development: AF_PACKET |
| TAP | carrier | Testing |
| NATIVE | pool | Dynamic packet buffers |

The frontend provides hooks for:

ℹ️ PKTIO and signaling
PKTIO's punter functionality extracts PFCP signaling packets from the wire and forwards them to PEP; upf_pfcp_punter.c implements this hand-off.

Chapter 12: Session Engine (PEP)

The session engine (pep_session_engine.c, 662K) is the largest file in PEP. It handles:

Key flow: Session Establishment

// Simplified flow in pep_session_engine.c:
1. Receive PFCP Session Establishment Request (from work_manager)
2. Decode PFCP IEs (PDRs, FARs, QERs, URRs)
3. Allocate UP SEID (Session Endpoint Identifier)
4. Build internal session representation
5. Encode session as FlatBuffer
6. Send to DP via session_client (provisioning)
7. Wait for DP acknowledgment
8. Build PFCP Session Establishment Response
9. Send response via relay interface back through DP to CP

Related files

Chapter 13: Work Manager (PEP)

The work manager is PEP's internal scheduler. It's critical for understanding where batching can be applied.

Priority queues (highest to lowest)

| Priority | Work Type | Dropped during OLP? |
| --- | --- | --- |
| 1 | Ongoing work (continuations) | No |
| 2 | Node messages (association, heartbeat) | No |
| 3 | Session report responses | No |
| 4 | Session report requests | No |
| 5 | Session deletion requests | Yes |
| 6 | Session modification requests | Yes |
| 7 | Session establishment requests | Yes |
| 8 | Background work (droppable) | Yes |
| 9 | Background work | No |
⚠️ Batching opportunity
The work manager dequeues one item at a time from the highest-priority non-empty queue. A "smart batching" approach could dequeue multiple items when they're available, process them together, and send a single batched provisioning request to the DP.

Overload protection

When queues grow (ext_adapter adds faster than main thread processes), the work manager checks max queue time. If exceeded, lower-priority work is dropped. Establishments drop first, then modifications, then deletions.
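A batched dequeue over these priority queues can be sketched in a few lines. The structures and names below are hypothetical (today's work manager dequeues one item at a time); the point is that draining up to a budget of items from the single highest non-empty priority lets one pass feed one batched provisioning request.

```c
/* Sketch of a batched dequeue across priority queues (0 = highest).
 * Hypothetical structures, not the real work manager. */
#include <assert.h>
#include <stddef.h>

enum { NUM_PRIOS = 9, QCAP = 64 };

typedef struct work_q {
    int    items[QCAP];
    size_t head, tail;  /* simple ring buffer per priority */
} work_q_t;

static size_t wq_len(const work_q_t *q) { return q->tail - q->head; }

static void wq_push(work_q_t *q, int item)
{
    q->items[q->tail++ % QCAP] = item;
}

/* Fills out[] from the single highest-priority non-empty queue,
 * never mixing priorities. Returns the number of items written. */
static size_t wq_dequeue_batch(work_q_t qs[NUM_PRIOS], int *out, size_t budget)
{
    for (int p = 0; p < NUM_PRIOS; p++) {
        work_q_t *q = &qs[p];
        size_t n = 0;
        while (wq_len(q) > 0 && n < budget)
            out[n++] = q->items[q->head++ % QCAP];
        if (n > 0)
            return n;
    }
    return 0;
}
```

Keeping each batch within one priority level preserves the drop semantics above: a burst of establishments never delays queued deletions of higher priority.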

📝 Quiz 2: Key Subsystems

1. What type of queue does the mailbox (mbox) use?

2. Which file contains the session establishment/modification/deletion logic in PEP?

3. In PEP's work manager, which work type has the LOWEST priority?

4. What does pep_adapter.c in the data-plane do?

5. What serialization format does PEP use to provision sessions to DP?

Chapter 14: Module Lifecycle

Both repos follow a strict module lifecycle pattern (documented in data-plane/MODULE_GUIDELINES.md):

┌──────────┐   create()   ┌──────────┐    start()   ┌──────────┐
│          │─────────────►│          │─────────────►│          │
│  (none)  │              │ CREATED  │              │ STARTING │
│          │              │          │              │          │
└──────────┘              └──────────┘              └────┬─────┘
                                                        │ on_started_cb()
                                                        ▼
┌──────────┐   delete()   ┌──────────┐    stop()    ┌──────────┐
│          │◄─────────────│          │◄─────────────│          │
│  (none)  │              │ STOPPED  │              │ STARTED  │
│          │              │          │              │          │
└──────────┘              └──────────┘              └──────────┘

Rules

API pattern

// Every module exposes:
module_t* module_create(dependencies...);
void      module_start(module_t*, on_started_cb, cb_arg);
void      module_stop(module_t*, on_stopped_cb, cb_arg);
void      module_destroy(module_t**);
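The pattern can be made concrete with a toy implementation that walks the lifecycle diagram. This is illustrative only: real modules acquire resources asynchronously, may fail to start, and the state names here are assumptions.

```c
/* Toy module following the lifecycle pattern above (illustrative;
 * real modules defer work and signal completion via the callback). */
#include <assert.h>
#include <stdlib.h>

typedef enum { MOD_CREATED, MOD_STARTING, MOD_STARTED, MOD_STOPPED } mod_state_t;
typedef void (*mod_done_cb)(void *arg);

typedef struct module {
    mod_state_t state;
} module_t;

static module_t *module_create(void)
{
    module_t *m = calloc(1, sizeof *m);
    m->state = MOD_CREATED;
    return m;
}

static void module_start(module_t *m, mod_done_cb cb, void *arg)
{
    m->state = MOD_STARTING;
    /* ...acquire resources (possibly deferred)... */
    m->state = MOD_STARTED;
    if (cb)
        cb(arg);  /* on_started_cb fires exactly once */
}

static void module_stop(module_t *m, mod_done_cb cb, void *arg)
{
    m->state = MOD_STOPPED;
    if (cb)
        cb(arg);
}

static void module_destroy(module_t **m)
{
    free(*m);
    *m = NULL;  /* clear the caller's pointer, as the ** signature hints */
}
```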

Chapter 15: Control & Engine Pattern

Components that span both background (controller) and foreground (egress) CPUs follow the Control & Engine pattern:

| Part | Runs on | Responsibility |
| --- | --- | --- |
| *_ctrl.c | Controller CPU | Configuration, lifecycle, provisioning, metrics collection |
| *_engine.c | Egress CPU(s) | Per-packet processing, fast-path logic |

Communication: Control → Engine

// Via mailbox:
mbox_send(mbox, egress_cpuid, MBOX_PRIORITY_HIGH, &msg);

// The engine polls its mbox in the core loop and processes messages
⚠️ No direct function calls control→engine
Direct calls risk race conditions. Always use mbox. Exception: metrics collection (read-only, with careful synchronization).

Examples in the codebase

Chapter 16: RCU in Data-Plane

Read-Copy-Update is used extensively to allow lock-free reads on shared data structures:

// Pattern:
// 1. Reader (egress, hot path):
rcu_read_lock();
element_t* elem = hash_table_lookup(table, key);
// use elem... (pointer valid only within rcu_read_lock/unlock)
rcu_read_unlock();

// 2. Writer (controller):
element_t* old = hash_table_lookup(table, key);
element_t* new_elem = copy_and_modify(old);  // "new" avoided: C++ keyword
hash_table_replace(table, key, new_elem);
call_rcu(old, free_element);  // deferred free after all readers done
⚠️ Critical rule
Never store a pointer to an RCU-protected element for async handling! The pointer is only valid within the rcu_read_lock/unlock section.

Chapter 17: Async Config (EVL Batch Iterator)

From CONFIG_GUIDELINES.md — the DP is moving from synchronous to asynchronous configuration application:

Rules for large-number objects

Two-step config model

// Step 1: Parent builds module-specific config
module_config_t* cfg = build_module_config(raw_config);

// Step 2: Child applies it
module_set_config(module, cfg, on_done, on_done_ctx, on_done_arg);
// All deferred work started inside set_config
// on_done called exactly once when all work complete
💡 Connection to batching
This async config pattern shows how the codebase already handles "don't block the main loop" problems. The batching feature likely follows similar principles: defer and batch work to avoid blocking signaling processing.

Chapter 18: Build System

Tools

Data-plane Makefile targets

make test        # Build and run tests (fast, clang + sanitizers)
make testsan     # Build with slow sanitizers (ASAN+UBSAN)
make testcov     # Get UT/SFT coverage
make lsp         # Generate compile_commands.json for clangd
make image       # Build Docker image for system test
make lint        # clang-tidy on head commit only
make builds/san  # Sanitizer build
make builds/debug # Debug build (no optimization, good for GDB)

Building pfcp-endpoint standalone

# Using bob:
./bob/bob init-dev
./bob/bob generate:3pp
./bob/bob generate:cmake
./bob/bob build

# Or manually with CMake:
mkdir build && cd build
cmake .. -DPLATFORM=Linux_elc
make -j$(nproc)

Key CMake options

| Option | Default | Purpose |
| --- | --- | --- |
| WITH_IPOS_SDK | OFF | Enable EPG/IPOS-specific code paths |
| BUILD_TESTING | ON | Build unit tests and SFTs |
| USE_ASAN | OFF | Address Sanitizer |
| USE_TSAN | OFF | Thread Sanitizer |

Chapter 19: Testing Layers

| Layer | Location | What it tests | Speed |
| --- | --- | --- | --- |
| Unit Tests (UT) | tests/ut/ | Individual functions/modules with mocks | Fast (seconds) |
| SFT (Software Function Test) | tests/sft/ | Full binary with simulated peers | Medium (minutes) |
| TOADS | tests/toads/ | Integration tests in containers | Slow (10+ min) |
| Veto | tests/veto/ | System-level tests in K8s | Slowest (hours) |

PEP SFT architecture

// tests/fixture/pep_sft_fix.c — the test fixture
// Simulates:
//   - Control Plane (sends PFCP messages)
//   - Data Plane (receives provisioning, sends responses)
//   - Config (NWCMA simulator)
//   - LEP (Local Endpoint)
//   - UEIP Allocator

// tests/sft/pep_sft_sessions.c — session test cases
// tests/sft/pep_sft_associations.c — association test cases
💡 For your feature work
You'll likely write SFT tests that verify batching behavior: send multiple session establishments rapidly and verify they're processed correctly with lower latency.

Chapter 20: CI/CD Pipeline

Pipeline stages (both repos)

| Pipeline | Trigger | What it does |
| --- | --- | --- |
| PreCodeReview | Push to Gerrit | Build, lint, UT, SFT, helm chart check |
| Drop | Merge to master | Full build, publish Docker image + Helm chart |
| Pra | Release | PRA (Product Release Approval) pipeline |
| VA2.0 | Scheduled | Vulnerability Assessment scans |
| SoC | Scheduled | Structure of Code analysis |

Gerrit workflow

# 1. Create branch
git checkout -b my-feature

# 2. Make changes, commit
git add -A
git commit  # Include Change-Id from commit-msg hook

# 3. Push for review
git push origin HEAD:refs/for/master

# 4. Wait for PreCodeReview (+1/-1)
# 5. Address review comments, amend
git commit --amend
git push origin HEAD:refs/for/master

# 6. Get Code-Review +2, Submit

📝 Quiz 3: Patterns, Build & Test

1. In the module lifecycle, what happens if start() can't get required resources?

2. How do Control and Engine parts communicate in the data-plane?

3. What does `make lsp` do in the data-plane repo?

4. What is the SFT test level?

5. Which CI pipeline runs when you push a commit to Gerrit?

Chapter 21: The Signaling Latency Problem

Let's trace the full signaling path and identify where latency accumulates:

Time ──────────────────────────────────────────────────────────►

SMF sends PFCP Session Establishment Request
  │   [Network latency]
  ▼
DP receives packet on NIC
  │   [PKTIO processing: parse, identify as signaling]
  ▼
DP punts to PEP (upf_pfcp_punter → UDP to PEP)
  │   [UDP transit: ~negligible within pod]
  ▼
PEP ext_adapter receives
  │   [Queue into work_manager: ~negligible]
  ▼
PEP main thread dequeues from work_manager
  │   [★ PROCESSING: decode PFCP, build session, encode FlatBuffer]
  ▼
PEP provisions session to DP
  │   [UDP to DP session_client]
  ▼
DP receives provisioning, installs session
  │   [★ SESSION INSTALL: mbox to egress, RCU update]
  ▼
DP sends acknowledgment to PEP
  │   [UDP back]
  ▼
PEP builds PFCP response
  │   [Relay via UDP to DP]
  ▼
DP sends PFCP response to SMF
  │   [Network]
  ▼
SMF receives response

Total signaling latency = sum of all ★ steps + transit

Where batching helps

When many PFCP messages arrive in a burst (e.g., during mass attach), the current model processes them one-by-one sequentially. Smart batching can:

  1. Batch DB operations: Multiple sessions writing to Redis can be pipelined
  2. Batch mbox messages: Send one batch notification to egress instead of N individual messages
  3. Batch provisioning: Send multiple sessions to DP in one request
  4. Reduce context switches: Process a batch before yielding to the event loop
ℹ️ The trade-off
Batching improves throughput but can increase latency for the first message in a batch (it waits for the batch to fill). "Smart" batching means: batch when there's a queue, don't batch when idle (don't add artificial delay).
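The "smart" part can be sketched as a flush policy: pick the batch size from the current queue depth, so an isolated message is processed immediately (zero added latency) while a backlog is drained in batches. The constant and function name are illustrative, not from the feature design.

```c
/* Sketch of an adaptive flush policy: never wait for a batch to fill
 * when the queue is empty. Illustrative names and constants only. */
#include <assert.h>
#include <stddef.h>

enum { BATCH_LIMIT = 16 };  /* cap per pass to bound response delay */

/* queue_depth: messages waiting right now.
 * Returns how many to process in this pass. */
static size_t smart_batch_size(size_t queue_depth)
{
    if (queue_depth == 0)
        return 0;                        /* idle: nothing to wait for */
    if (queue_depth == 1)
        return 1;                        /* no burst: process right away */
    return queue_depth < BATCH_LIMIT ? queue_depth : BATCH_LIMIT;
}
```

A single message thus pays no batching penalty; only bursts amortize their overhead across a batch.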

Chapter 22: Batching Concept

Smart batching strategy

Without batching:                        With smart batching:

msg1 → process → provision → ack         msg1 ─┐
msg2 → process → provision → ack         msg2 ─┼─► batch process → batch provision → ack all
msg3 → process → provision → ack         msg3 ─┘
msg4 → process → provision → ack         msg4 ─┐
                                         msg5 ─┼─► batch process → batch provision → ack all
Total: 4 round-trips to DP               msg6 ─┘

                                         Total: 2 round-trips to DP

Key design questions for the feature

  1. Batch trigger: When to flush a batch? (queue depth? timer? both?)
  2. Batch size: Maximum messages per batch?
  3. Scope: Which operations can be batched together?
  4. Error handling: If one message in a batch fails, what happens to others?
  5. Ordering: Must messages for the same session be ordered?

Existing batching in the codebase

// pfcp-endpoint/src/pep_options.h:
pep_options_kvdb_request_batch_interval(const pep_options_t* opt);
// Already has a concept of batching DB requests!

// data-plane/upf/src/upf_session_queue.h:
// Per-session queue already serializes operations
// Multiple sessions can be processed in parallel
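One possible answer to design question 4 (error handling) is a batched provisioning response carrying per-item result codes, so a single bad session fails alone rather than poisoning the whole batch. The layout and names below are illustrative assumptions, not the real PEP/DP wire format.

```c
/* Per-item result codes for a batched provisioning response (sketch;
 * hypothetical layout, not the real PEP/DP protocol). */
#include <assert.h>
#include <stdint.h>

enum { PROV_BATCH_MAX = 8 };

typedef enum { PROV_OK = 0, PROV_NO_RESOURCES, PROV_BAD_RULE } prov_rc_t;

typedef struct prov_batch_resp {
    uint32_t  count;                     /* sessions in the batch */
    prov_rc_t rc[PROV_BATCH_MAX];        /* one result per session */
} prov_batch_resp_t;

/* PEP side: ack the successes, retry or reject only the failures. */
static unsigned count_failures(const prov_batch_resp_t *r)
{
    unsigned fails = 0;
    for (uint32_t i = 0; i < r->count; i++)
        if (r->rc[i] != PROV_OK)
            fails++;
    return fails;
}
```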

Chapter 23: Code Reading Plan

Here's your recommended reading order to understand the feature context:

Phase 1: Understand the architecture (Week 1)

| # | File | Why |
| --- | --- | --- |
| 1 | data-plane/ARCHITECTURE.md | Overall DP architecture, roles, concepts |
| 2 | data-plane/MODULE_GUIDELINES.md | Module lifecycle pattern |
| 3 | data-plane/CONFIG_GUIDELINES.md | Async config pattern (relevant to batching) |
| 4 | pfcp-endpoint/docs/mad/pfcp-endpoint.md | PEP architecture, work manager, steering |
| 5 | pfcp-endpoint/CONTRIBUTING.md | How to build and contribute |

Phase 2: Understand the signaling path (Week 2)

| # | File | What to look for |
| --- | --- | --- |
| 6 | pfcp-endpoint/src/pep.h | Main PEP struct and create params — see all dependencies |
| 7 | pfcp-endpoint/src/ext_adapter.c | How messages arrive from DP |
| 8 | pfcp-endpoint/src/pep_session_engine.h | Session engine interface (start with .h, not .c!) |
| 9 | data-plane/upf/src/pep_adapter.c/.h | DP side of the PEP↔DP interface |
| 10 | data-plane/upf/src/upf_session_queue.h | Session queue message types |
| 11 | data-plane/mbox/include/mbox/mbox.h | Mailbox API |

Phase 3: Understand batching points (Week 3)

| # | File | What to look for |
| --- | --- | --- |
| 12 | pfcp-endpoint/src/pep_options.h | Search for "batch" — existing batch config |
| 13 | pfcp-endpoint/src/pep_session_engine.c | Search for provisioning flow (how sessions are sent to DP) |
| 14 | data-plane/upf/src/upf_session_ctrl.c | How DP receives and installs sessions (search for "pep_adapter") |
| 15 | data-plane/mbox/src/mbox.c | Mbox implementation — understand send/receive |
💡 Reading strategy for huge files

Chapter 24: Day-to-Day Workflow

Setting up your environment

# 1. Generate compile_commands.json for IDE
cd /workspace/git/etahris/data-plane
make lsp

# 2. For pfcp-endpoint:
cd /workspace/git/etahris/pfcp-endpoint
./bob/bob init-dev
./bob/bob generate:3pp
./bob/bob generate:cmake
# compile_commands.json will be in the build dir

Running tests locally

# Data-plane: fast test cycle
cd /workspace/git/etahris/data-plane
make test                    # All tests
make test t=upf_session_SUITE  # Specific suite

# PEP: using bob
cd /workspace/git/etahris/pfcp-endpoint
./bob/bob build
./bob/bob test

Debugging tips

Commit message format

Short summary (max 50 chars)

Longer description of what and why (not how).
Wrap at 72 characters.

Change-Id: I1234567890abcdef  (auto-generated by hook)

Key contacts

📝 Quiz 4: Feature & Workflow

1. What is the main goal of "smart internal batching"?

2. What makes the batching "smart" vs naive?

3. Which file should you read FIRST when studying a new module?

4. What command generates compile_commands.json for the data-plane?

5. In the PEP work manager, during overload which messages are dropped FIRST?