pw_rpc#
The pw_rpc module provides a system for defining and invoking remote procedure calls (RPCs) on a device. This document discusses the pw_rpc protocol and its C++ implementation. pw_rpc implementations for other languages are described in their own documents.
Try it out!
For a quick intro to pw_rpc, see the pw_hdlc: RPC over HDLC example project in the pw_hdlc module.
Warning
This documentation is under construction. Many sections are outdated or incomplete. The content needs to be reorganized.
Implementations#
Pigweed provides several client and server implementations of pw_rpc.
| Language | Server | Client |
|---|---|---|
| C++ (raw) | ✅ | ✅ |
| C++ (Nanopb) | ✅ | ✅ |
| C++ (pw_protobuf) | ✅ | ✅ |
| Java | | ✅ |
| Python | | ✅ |
| TypeScript | | in development |
Warning
pw_protobuf and nanopb RPC services cannot currently coexist within the same RPC server. Unless you are running multiple RPC servers, you cannot incrementally migrate services from one protobuf implementation to another, or otherwise mix and match. See Issue 234874320.
RPC semantics#
The semantics of pw_rpc are similar to gRPC.
RPC call lifecycle#
In pw_rpc, an RPC begins when the client sends an initial packet. The server receives the packet, looks up the relevant service method, then calls into the RPC function. The RPC is considered active until the server sends a status to finish the RPC. The client may terminate an ongoing RPC by cancelling it.
Multiple RPC requests to the same method may be made simultaneously.
Depending on the type of RPC, the client and server exchange zero or more protobuf request or response payloads. There are four RPC types:
Unary. The client sends one request and the server sends one response with a status.
Server streaming. The client sends one request and the server sends zero or more responses followed by a status.
Client streaming. The client sends zero or more requests and the server sends one response with a status.
Bidirectional streaming. The client sends zero or more requests and the server sends zero or more responses followed by a status.
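The four RPC types map directly onto `stream` markers in a .proto definition. As an illustration (this service and its method names are hypothetical, not part of pw_rpc):

```proto
service DataService {
  // Unary: one request, one response with a status.
  rpc GetValue(Request) returns (Response) {}

  // Server streaming: one request, zero or more responses, then a status.
  rpc ListValues(Request) returns (stream Response) {}

  // Client streaming: zero or more requests, one response with a status.
  rpc PutValues(stream Request) returns (Response) {}

  // Bidirectional streaming: both sides stream; the server ends with a status.
  rpc Exchange(stream Request) returns (stream Response) {}
}
```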
Events#
The key events in the RPC lifecycle are:
Start. The client initiates the RPC. The server’s RPC body executes.
Finish. The server sends a status and completes the RPC. The client calls a callback.
Request. The client sends a request protobuf. The server calls a callback when it receives it. In unary and server streaming RPCs, there is only one request and it is handled when the RPC starts.
Response. The server sends a response protobuf. The client calls a callback when it receives it. In unary and client streaming RPCs, there is only one response and it is handled when the RPC completes.
Error. The server or client terminates the RPC abnormally with a status. The receiving endpoint calls a callback.
Request Completion. The client sends a message indicating that it would like the call to complete. The server calls a callback when it receives it; some servers may ignore the request completion message. In client and bidirectional streaming RPCs, this also indicates that the client has finished sending requests.
Status codes#
pw_rpc call objects (ClientReaderWriter, ServerReaderWriter, etc.) use certain status codes to indicate what occurred. These codes are returned from functions like Write() or Finish().
OK – The operation succeeded.
UNAVAILABLE – The channel is not currently registered with the server or client.
UNKNOWN – Sending a packet failed due to an unrecoverable pw::rpc::ChannelOutput::Send() error.
Unrequested responses#
pw_rpc supports sending responses to RPCs that have not yet been invoked by a client. This is useful in testing and in situations like an RPC that triggers a reboot. After the reboot, the device opens the writer object and sends its response to the client.
The C++ API for opening a server reader/writer takes the generated RPC function as a template parameter. The server to use, channel ID, and service instance are passed as arguments. The API is the same for all RPC types, except the appropriate reader/writer class must be used.
// Open a ServerWriter for a server streaming RPC.
auto writer = RawServerWriter::Open<pw_rpc::raw::ServiceName::MethodName>(
server, channel_id, service_instance);
// Send some responses, even though the client has not yet called this RPC.
CHECK_OK(writer.Write(encoded_response_1));
CHECK_OK(writer.Write(encoded_response_2));
// Finish the RPC.
CHECK_OK(writer.Finish(OkStatus()));
Creating an RPC#
1. RPC service declaration#
Pigweed RPCs are declared in a protocol buffer service definition.
syntax = "proto3";
package foo.bar;
message Request {}
message Response {
int32 number = 1;
}
service TheService {
rpc MethodOne(Request) returns (Response) {}
rpc MethodTwo(Request) returns (stream Response) {}
}
This protocol buffer is declared in a BUILD.gn file as follows:
import("//build_overrides/pigweed.gni")
import("$dir_pw_protobuf_compiler/proto.gni")
pw_proto_library("the_service_proto") {
sources = [ "foo_bar/the_service.proto" ]
}
proto2 or proto3 syntax?
Always use proto3 syntax rather than proto2 for new protocol buffers. Proto2 protobufs can be compiled for pw_rpc, but they are not as well supported as proto3. Specifically, pw_rpc lacks support for non-zero default values in proto2. When using Nanopb with pw_rpc, proto2 response protobufs with non-zero field defaults should be manually initialized to the default struct.
In the past, proto3 was sometimes avoided because it lacked support for field presence detection. Fortunately, this has been fixed: proto3 now supports optional fields, which are equivalent to proto2 optional fields.
If you need to distinguish between a default-valued field and a missing field, mark the field as optional. The presence of the field can be detected with std::optional, a HasField(name), or a has_<field> member, depending on the library.
Optional fields have some overhead: if using Nanopb, default-valued fields are included in the encoded proto, and the proto structs have a has_<field> flag for each optional field. Use plain fields if field presence detection is not needed.
syntax = "proto3";
message MyMessage {
// Leaving this field unset is equivalent to setting it to 0.
int32 number = 1;
// Setting this field to 0 is different from leaving it unset.
optional int32 other_number = 2;
}
2. RPC code generation#
pw_rpc generates a C++ header file for each .proto file. This header is generated in the build output directory. Its exact location varies by build system and toolchain, but the C++ include path always matches the sources declaration in the pw_proto_library. The .proto extension is replaced with an extension corresponding to the protobuf library in use.
| Protobuf libraries | Build subtarget | Protobuf header | pw_rpc header |
|---|---|---|---|
| Raw only | .raw_rpc | (none) | .raw_rpc.pb.h |
| Nanopb or raw | .nanopb_rpc | .pb.h | .rpc.pb.h |
| pw_protobuf or raw | .pwpb_rpc | .pwpb.h | .rpc.pwpb.h |
For example, the generated RPC header for "foo_bar/the_service.proto" is "foo_bar/the_service.rpc.pb.h" for Nanopb or "foo_bar/the_service.raw_rpc.pb.h" for raw RPCs.
The generated header defines a base class for each RPC service declared in the .proto file. A service named TheService in package foo.bar would generate the following base class for pw_protobuf:
template <typename Implementation>
class foo::bar::pw_rpc::pwpb::TheService::Service
3. RPC service definition#
The service class is implemented by inheriting from the generated RPC service base class and defining a method for each RPC. The methods must match the name and function signature for one of the supported protobuf implementations. Services may mix and match protobuf implementations within one service.
Tip
The generated code includes RPC service implementation stubs. You can reference or copy and paste these to get started with implementing a service. These stub classes are generated at the bottom of the pw_rpc proto header.
To use the stubs, do the following:
1. Locate the generated RPC header in the build directory. For example:
   find out/ -name <proto_name>.rpc.pwpb.h
2. Scroll to the bottom of the generated RPC header.
3. Copy the stub class declaration to a header file.
4. Copy the member function definitions to a source file.
5. Rename the class or change the namespace, if desired.
6. List these files in a build target with a dependency on the pw_proto_library.
A pw_protobuf implementation of this service would be as follows:
#include "foo_bar/the_service.rpc.pwpb.h"
namespace foo::bar {
class TheService : public pw_rpc::pwpb::TheService::Service<TheService> {
public:
pw::Status MethodOne(const Request::Message& request,
Response::Message& response) {
// implementation
response.number = 123;
return pw::OkStatus();
}
void MethodTwo(const Request::Message& request,
ServerWriter<Response::Message>& response) {
// implementation
response.Write({.number = 123});
}
};
} // namespace foo::bar
The pw_protobuf implementation would be declared in a BUILD.gn:
import("//build_overrides/pigweed.gni")
import("$dir_pw_build/target_types.gni")
pw_source_set("the_service") {
public_configs = [ ":public" ]
public = [ "public/foo_bar/service.h" ]
public_deps = [ ":the_service_proto.pwpb_rpc" ]
}
4. Register the service with a server#
This example code sets up an RPC server with an HDLC channel output and the example service.
// Set up the output channel for the pw_rpc server to use. This configures the
// pw_rpc server to use HDLC over UART; projects not using UART and HDLC must
// adapt this as necessary.
pw::stream::SysIoWriter writer;
pw::rpc::FixedMtuChannelOutput<kMaxTransmissionUnit> hdlc_channel_output(
writer, pw::hdlc::kDefaultRpcAddress, "HDLC output");
// Allocate an array of channels for the server to use. If dynamic allocation
// is enabled (PW_RPC_DYNAMIC_ALLOCATION=1), the server can be initialized
// without any channels, and they can be added later.
pw::rpc::Channel channels[] = {
pw::rpc::Channel::Create<1>(&hdlc_channel_output)};
// Declare the pw_rpc server with the HDLC channel.
pw::rpc::Server server(channels);
foo::bar::TheService the_service;
pw::rpc::SomeOtherService some_other_service;
void RegisterServices() {
// Register the foo.bar.TheService example service and another service.
server.RegisterService(the_service, some_other_service);
}
int main() {
// Set up the server.
RegisterServices();
// Declare a buffer for decoding incoming HDLC frames.
std::array<std::byte, kMaxTransmissionUnit> input_buffer;
PW_LOG_INFO("Starting pw_rpc server");
pw::hdlc::ReadAndProcessPackets(server, input_buffer);
}
Channels#
pw_rpc sends all of its packets over channels. These are logical, application-layer routes used to tell the RPC system where a packet should go. Channels over a client-server connection must all have a unique ID, which can be assigned statically at compile time or dynamically.
// Creating a channel with the static ID 3.
pw::rpc::Channel static_channel = pw::rpc::Channel::Create<3>(&output);
// Grouping channel IDs within an enum can lead to clearer code.
enum ChannelId {
kUartChannel = 1,
kSpiChannel = 2,
};
// Creating a channel with a static ID defined within an enum.
pw::rpc::Channel another_static_channel =
pw::rpc::Channel::Create<ChannelId::kUartChannel>(&output);
// Creating a channel with a dynamic ID (note that no output is provided; it
// will be set when the channel is used).
pw::rpc::Channel dynamic_channel;
Sometimes, the ID and output of a channel are not known at compile time as they depend on information stored on the physical device. To support this use case, a dynamically-assignable channel can be configured once at runtime with an ID and output.
// Create a dynamic channel without a compile-time ID or output.
pw::rpc::Channel dynamic_channel;
void Init() {
// Called during boot to pull the channel configuration from the system.
dynamic_channel.Configure(GetChannelId(), some_output);
}
Adding and removing channels#
New channels may be registered with the OpenChannel function. If dynamic allocation is enabled (PW_RPC_DYNAMIC_ALLOCATION is 1), any number of channels may be registered. If dynamic allocation is disabled, new channels may only be registered if there are available channel slots in the span provided to the RPC endpoint at construction.
A channel may be closed and unregistered with an endpoint by calling CloseChannel on the endpoint with the corresponding channel ID. This will terminate any pending calls and call their on_error callback with the ABORTED status.
// When a channel is closed, any pending calls will receive
// on_error callbacks with ABORTED status.
client->CloseChannel(1);
Services#
A service is a logical grouping of RPCs defined within a .proto file. pw_rpc uses these .proto definitions to generate code for a base service, from which user-defined RPCs are implemented.
pw_rpc supports multiple protobuf libraries, and the generated code API depends on which is used.
Services must be registered with a server in order to call their methods. Services may later be unregistered, which aborts calls for methods in that service and prevents future calls to them, until the service is re-registered.
Protobuf library APIs#
The generated APIs for each supported protobuf library are documented separately; see the pw_rpc/pwpb and pw_rpc/nanopb documentation.
Testing a pw_rpc integration#
After setting up a pw_rpc server in your project, you can test that it is working as intended by registering the provided EchoService, defined in echo.proto, which echoes back a message that it receives.
syntax = "proto3";
package pw.rpc;
option java_package = "dev.pigweed.pw_rpc.proto";
service EchoService {
rpc Echo(EchoMessage) returns (EchoMessage) {}
}
message EchoMessage {
string msg = 1;
}
For example, in C++ with pw_protobuf:
#include "pw_rpc/server.h"
// Include the appropriate header for your protobuf library.
#include "pw_rpc/echo_service_pwpb.h"
constexpr pw::rpc::Channel kChannels[] = { /* ... */ };
static pw::rpc::Server server(kChannels);
static pw::rpc::EchoService echo_service;
void Init() {
server.RegisterService(echo_service);
}
Benchmarking and stress testing#
pw_rpc provides an RPC service and Python module for stress testing and benchmarking a pw_rpc deployment. See pw_rpc Benchmarking.
Naming#
Reserved names#
pw_rpc reserves a few service method names so they can be used for generated classes. The following names cannot be used for service methods:
Client
Service
Any reserved words in the languages pw_rpc supports (e.g. class).
pw_rpc does not reserve any service names, but the restriction of avoiding reserved words in supported languages applies.
Service naming style#
pw_rpc service names should use capitalized camel case and should not use the term “Service”. Appending “Service” to a service name is redundant, similar to appending “Class” or “Function” to a class or function name. The C++ implementation class may use “Service” in its name, however.
For example, a service for accessing a file system should simply be named service FileSystem, rather than service FileSystemService, in the .proto file.
// file.proto
package pw.file;
service FileSystem {
rpc List(ListRequest) returns (stream ListResponse);
}
The C++ service implementation class may append “Service” to the name.
// file_system_service.h
#include "pw_file/file.raw_rpc.pb.h"
namespace pw::file {
class FileSystemService : public pw_rpc::raw::FileSystem::Service<FileSystemService> {
void List(ConstByteSpan request, RawServerWriter& writer);
};
} // namespace pw::file
For upstream Pigweed services, this naming style is a requirement. Note that some services created before this was established may use non-compliant names. For Pigweed users, this naming style is a suggestion.
C++ payload sizing limitations#
The individual size of each sent RPC request or response is limited by pw_rpc’s PW_RPC_ENCODING_BUFFER_SIZE_BYTES configuration option when using Pigweed’s C++ implementation. While multiple RPC messages can be enqueued (as permitted by the underlying transport), if a single individual sent message exceeds the limitations of the statically allocated encode buffer, the packet will fail to encode and be dropped.
This applies to all C++ RPC service implementations (nanopb, raw, and pwpb), so it’s important to ensure request and response message sizes do not exceed this limitation.
As pw_rpc has some additional encoding overhead, a helper, pw::rpc::MaxSafePayloadSize(), is provided to expose the practical max RPC message payload size.
#include "pw_file/file.raw_rpc.pb.h"
#include "pw_rpc/channel.h"
namespace pw::file {
class FileSystemService : public pw_rpc::raw::FileSystem::Service<FileSystemService> {
public:
void List(ConstByteSpan request, RawServerWriter& writer);
private:
// Allocate a buffer for building proto responses.
static constexpr size_t kEncodeBufferSize = pw::rpc::MaxSafePayloadSize();
std::array<std::byte, kEncodeBufferSize> encode_buffer_;
};
} // namespace pw::file
Protocol description#
Pigweed RPC servers and clients communicate using pw_rpc packets. These packets are used to send requests and responses, control streams, cancel ongoing RPCs, and report errors.
Packet format#
Pigweed RPC packets consist of a type and a set of fields. The packets are encoded as protocol buffers. The full packet format is described in pw_rpc/pw_rpc/internal/packet.proto.
syntax = "proto3";
package pw.rpc.internal;
option java_package = "dev.pigweed.pw_rpc.internal";
enum PacketType {
// To simplify identifying the origin of a packet, client-to-server packets
// use even numbers and server-to-client packets use odd numbers.
// Client-to-server packets
// The client invokes an RPC. Always the first packet.
REQUEST = 0;
// A message in a client stream. Always sent after a REQUEST and before a
// CLIENT_REQUEST_COMPLETION.
CLIENT_STREAM = 2;
// The client received a packet for an RPC it did not request.
CLIENT_ERROR = 4;
// Client has requested for call completion. In client streaming and
// bi-directional streaming RPCs, this also indicates that the client is done
// with sending requests.
CLIENT_REQUEST_COMPLETION = 8;
// Server-to-client packets
// The RPC has finished.
RESPONSE = 1;
// The server was unable to process a request.
SERVER_ERROR = 5;
// A message in a server stream.
SERVER_STREAM = 7;
// Reserve field numbers for deprecated PacketTypes.
reserved 3; // SERVER_STREAM_END (equivalent to RESPONSE now)
reserved 6; // CANCEL (replaced by CLIENT_ERROR with status CANCELLED)
}
message RpcPacket {
// The type of packet. Determines which other fields are used.
PacketType type = 1;
// Channel through which the packet is sent.
uint32 channel_id = 2;
// Hash of the fully-qualified name of the service with which this packet is
// associated. For RPC packets, this is the service that processes the packet.
fixed32 service_id = 3;
// Hash of the name of the method which should process this packet.
fixed32 method_id = 4;
// The packet's payload, which is an encoded protobuf.
bytes payload = 5;
// Status code for the RPC response or error.
uint32 status = 6;
// Unique identifier for the call that initiated this RPC. Optionally set by
// the client in the initial request and sent in all subsequent client
// packets; echoed by the server.
uint32 call_id = 7;
}
The packet type and RPC type determine which fields are present in a Pigweed RPC packet. Each packet type is only sent by either the client or the server. These tables describe the meaning of and fields included with each packet type.
Client-to-server packets#
| packet type | description |
|---|---|
| REQUEST | Invoke an RPC. Fields: channel_id, service_id, method_id, payload (unary & server streaming only), call_id (optional) |
| CLIENT_STREAM | Message in a client stream. Fields: channel_id, service_id, method_id, payload, call_id (if set in REQUEST) |
| CLIENT_REQUEST_COMPLETION | Client requested stream completion. Fields: channel_id, service_id, method_id, call_id (if set in REQUEST) |
| CLIENT_ERROR | Abort an ongoing RPC. Fields: channel_id, service_id, method_id, status, call_id (if set in REQUEST) |
Client errors
The client sends CLIENT_ERROR packets to a server when it receives a packet it did not request. If possible, the server should abort the RPC.
The status code indicates the type of error. The status code is logged, but all status codes result in the same action by the server: aborting the RPC.
CANCELLED – The client requested that the RPC be cancelled.
ABORTED – The RPC was aborted due to its channel being closed.
NOT_FOUND – Received a packet for a service method the client does not recognize.
FAILED_PRECONDITION – Received a packet for a service method that the client did not invoke.
DATA_LOSS – Received a corrupt packet for a pending service method.
INVALID_ARGUMENT – The server sent a packet type to an RPC that does not support it (a SERVER_STREAM was sent to an RPC with no server stream).
UNAVAILABLE – Received a packet for an unknown channel.
Server-to-client packets#
| packet type | description |
|---|---|
| RESPONSE | The RPC is complete. Fields: channel_id, service_id, method_id, status, payload (unary & client streaming only), call_id (if set in REQUEST) |
| SERVER_STREAM | Message in a server stream. Fields: channel_id, service_id, method_id, payload, call_id (if set in REQUEST) |
| SERVER_ERROR | Received unexpected packet. Fields: channel_id, service_id (if relevant), method_id (if relevant), status, call_id (if set in REQUEST) |
All server packets contain the same call_id that was set in the initial request made by the client, if any.
Server errors
The server sends SERVER_ERROR packets when it receives a packet it cannot process. The client should abort any RPC for which it receives an error. The status field indicates the type of error.
NOT_FOUND – The requested service or method does not exist.
FAILED_PRECONDITION – A client stream or cancel packet was sent for an RPC that is not pending.
INVALID_ARGUMENT – The client sent a packet type to an RPC that does not support it (a CLIENT_STREAM was sent to an RPC with no client stream).
RESOURCE_EXHAUSTED – The request came on a new channel, but a channel could not be allocated for it.
ABORTED – The RPC was aborted due to its channel being closed.
INTERNAL – The server was unable to respond to an RPC due to an unrecoverable internal error.
UNAVAILABLE – Received a packet for an unknown channel.
Invoking a service method#
Calling an RPC requires a specific sequence of packets. This section describes the protocol for calling service methods of each type: unary, server streaming, client streaming, and bidirectional streaming.
The basic flow for all RPC invocations is as follows:
1. The client sends a REQUEST packet. It includes a payload for unary & server streaming RPCs.
2. For client and bidirectional streaming RPCs, the client may send any number of CLIENT_STREAM packets with payloads.
3. For server and bidirectional streaming RPCs, the server may send any number of SERVER_STREAM packets.
4. The server sends a RESPONSE packet. It includes a payload for unary & client streaming RPCs. The RPC is complete.
The client may cancel an ongoing RPC at any time by sending a CLIENT_ERROR
packet with status CANCELLED
. The server may finish an ongoing RPC at any
time by sending the RESPONSE
packet.
Unary RPC#
In a unary RPC, the client sends a single request and the server sends a single response.
The client may attempt to cancel a unary RPC by sending a CLIENT_ERROR
packet with status CANCELLED
. The server sends no response to a cancelled
RPC. If the server processes the unary RPC synchronously (the handling thread
sends the response), it may not be possible to cancel the RPC.
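Combining this with the packet fields listed earlier, a successful unary exchange consists of just two packets (fields in parentheses are optional):

```
client → server: REQUEST   [channel_id, service_id, method_id, payload, (call_id)]
server → client: RESPONSE  [channel_id, service_id, method_id, payload, status, (call_id)]
```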
Server streaming RPC#
In a server streaming RPC, the client sends a single request and the server
sends any number of SERVER_STREAM
packets followed by a RESPONSE
packet.
The client may terminate a server streaming RPC by sending a CLIENT_ERROR packet with status CANCELLED. The server sends no response.
Client streaming RPC#
In a client streaming RPC, the client starts the RPC by sending a REQUEST
packet with no payload. It then sends any number of messages in
CLIENT_STREAM
packets, followed by a CLIENT_REQUEST_COMPLETION
. The server sends
a single RESPONSE
to finish the RPC.
The server may finish the RPC at any time by sending its RESPONSE
packet,
even if it has not yet received the CLIENT_REQUEST_COMPLETION
packet. The client may
terminate the RPC at any time by sending a CLIENT_ERROR
packet with status
CANCELLED
.
Bidirectional streaming RPC#
In a bidirectional streaming RPC, the client sends any number of requests and
the server sends any number of responses. The client invokes the RPC by sending
a REQUEST
with no payload. It sends a CLIENT_REQUEST_COMPLETION
packet when it
has finished sending requests. The server sends a RESPONSE
packet to finish
the RPC.
The server may finish the RPC at any time by sending the RESPONSE
packet,
even if it has not received the CLIENT_REQUEST_COMPLETION
packet. The client may
terminate the RPC at any time by sending a CLIENT_ERROR
packet with status
CANCELLED
.
C++ API#
RPC server#
Declare an instance of rpc::Server
and register services with it.
TODO
Document the public interface
Size report#
The following size report showcases the memory usage of the core RPC server. It
is configured with a single channel using a basic transport interface that
directly reads from and writes to pw_sys_io
. The transport has a 128-byte
packet buffer, which accounts for most of the example’s RAM usage. This is
not a suitable transport for an actual product; a real implementation would have
additional overhead proportional to the complexity of the transport.
Label | Segment | Delta
---|---|---
Server by itself | FLASH | +4,068
RPC server implementation#
The Method class#
The RPC Server depends on the pw::rpc::internal::Method
class. Method
serves as the bridge between the pw_rpc
server library and the user-defined
RPC functions. Each supported protobuf implementation extends Method
to
implement its request and response proto handling. The pw_rpc
server
calls into the Method
implementation through the base class’s Invoke
function.
Method
implementations store metadata about each method, including a
function pointer to the user-defined method implementation. They also provide
static constexpr
functions for creating each type of method. Method
implementations must satisfy the MethodImplTester
test class in
pw_rpc/internal/method_impl_tester.h
.
See pw_rpc/internal/method.h
for more details about Method
.
Packet flow#
Requests#
[Diagram: request packets flow from the client to the server, which dispatches them through the generated service and internal Method to the user-defined RPC.]
Responses#
[Diagram: response packets flow from the user-defined RPC back through the internal Method, generated service, server, and channel to the client.]
RPC client#
The RPC client is used to send requests to a server and manages the contexts of ongoing RPCs.
Setting up a client#
The pw::rpc::Client
class is instantiated with a list of channels that it
uses to communicate. These channels can be shared with a server, but multiple
clients cannot use the same channels.
To send incoming RPC packets from the transport layer to be processed by a
client, the client’s ProcessPacket
function is called with the packet data.
#include "pw_rpc/client.h"
namespace {
pw::rpc::Channel my_channels[] = {
pw::rpc::Channel::Create<1>(&my_channel_output)};
pw::rpc::Client my_client(my_channels);
} // namespace
// Called when the transport layer receives an RPC packet.
void ProcessRpcPacket(ConstByteSpan packet) {
my_client.ProcessPacket(packet);
}
Note that client processing such as callbacks will be invoked within
the body of ProcessPacket
.
If certain packets need to be filtered out, or if certain client processing
needs to be invoked from a specific thread or context, the PacketMeta
class
can be used to determine which service or channel a packet is targeting. After
filtering, ProcessPacket
can be called from the appropriate environment.
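For example, packets could be inspected before dispatch. The following is a sketch only: it assumes PacketMeta exposes FromBuffer(), channel_id(), and destination_is_client() (check pw_rpc/packet_meta.h for the exact API), and kRpcChannelId and DeferToRpcThread are hypothetical names.

```cpp
#include "pw_rpc/packet_meta.h"

// Sketch: peek at a packet's routing information before handing it to the
// client. kRpcChannelId and DeferToRpcThread are hypothetical.
void RoutePacket(pw::ConstByteSpan packet) {
  pw::Result<pw::rpc::PacketMeta> meta =
      pw::rpc::PacketMeta::FromBuffer(packet);
  if (!meta.ok()) {
    return;  // Not a decodable RPC packet; drop it.
  }
  if (meta->destination_is_client() && meta->channel_id() == kRpcChannelId) {
    // Arrange for my_client.ProcessPacket(packet) to run on the desired
    // thread, so client callbacks execute in that context.
    DeferToRpcThread(packet);
  }
}
```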
Making RPC calls#
RPC calls are not made directly through the client, but using one of its
registered channels instead. A service client class is generated from a .proto
file for each selected protobuf library, which is then used to send RPC requests
through a given channel. The API for this depends on the protobuf library;
please refer to the
appropriate documentation. Multiple
service client implementations can exist simultaneously and share the same
Client
class.
When a call is made, a call object is returned to the caller. This object tracks the ongoing RPC call, and can be used to manage it. An RPC call is only active as long as its call object is alive.
Tip
Use std::move
when passing around call objects to keep RPCs alive.
Example#
#include "pw_rpc/echo_service_nanopb.h"
namespace {
// Generated clients are namespaced with their proto library.
using EchoClient = pw_rpc::nanopb::EchoService::Client;
// RPC channel ID on which to make client calls. RPC calls cannot be made on
// channel 0 (Channel::kUnassignedChannelId).
constexpr uint32_t kDefaultChannelId = 1;
pw::rpc::NanopbUnaryReceiver<pw_rpc_EchoMessage> echo_call;
// Callback invoked when a response is received. This is called synchronously
// from Client::ProcessPacket.
void EchoResponse(const pw_rpc_EchoMessage& response,
pw::Status status) {
if (status.ok()) {
PW_LOG_INFO("Received echo response: %s", response.msg);
} else {
PW_LOG_ERROR("Echo failed with status %d",
static_cast<int>(status.code()));
}
}
} // namespace
void CallEcho(const char* message) {
// Create a client to call the EchoService.
EchoClient echo_client(my_rpc_client, kDefaultChannelId);
pw_rpc_EchoMessage request{};
pw::string::Copy(message, request.msg);
// By assigning the returned call to the global echo_call, the RPC
// call is kept alive until it completes. When a response is received, it
// will be logged by the handler function and the call will complete.
echo_call = echo_client.Echo(request, EchoResponse);
if (!echo_call.active()) {
// The RPC call was not sent. This could occur due to, for example, an
// invalid channel ID. Handle if necessary.
}
}
Call objects#
An RPC call is represented by a call object. Server and client calls use the same base call class in C++, but the public API is different depending on the type of call (see RPC call lifecycle) and whether it is being used by the server or client.
The public call types are as follows:
RPC Type | Server call | Client call
---|---|---
Unary | (Raw\|Nanopb\|Pwpb)UnaryResponder | (Raw\|Nanopb\|Pwpb)UnaryReceiver
Server streaming | (Raw\|Nanopb\|Pwpb)ServerWriter | (Raw\|Nanopb\|Pwpb)ClientReader
Client streaming | (Raw\|Nanopb\|Pwpb)ServerReader | (Raw\|Nanopb\|Pwpb)ClientWriter
Bidirectional streaming | (Raw\|Nanopb\|Pwpb)ServerReaderWriter | (Raw\|Nanopb\|Pwpb)ClientReaderWriter
Client call API#
Client call objects provide a few common methods.
-
class pw::rpc::ClientCallType#
The ClientCallType will be one of the following types:
- (Raw|Nanopb|Pwpb)UnaryReceiver for unary
- (Raw|Nanopb|Pwpb)ClientReader for server streaming
- (Raw|Nanopb|Pwpb)ClientWriter for client streaming
- (Raw|Nanopb|Pwpb)ClientReaderWriter for bidirectional streaming
-
bool active() const#
Returns true if the call is active.
-
uint32_t channel_id() const#
Returns the channel ID of this call, which is 0 if the call is inactive.
-
uint32_t id() const#
Returns the call ID, a unique identifier for this call.
-
void Write(RequestType)#
Only available on client and bidirectional streaming calls. Sends a stream request. Returns:
- OK – the request was successfully sent
- FAILED_PRECONDITION – the writer is closed
- INTERNAL – pw_rpc was unable to encode the message; does not apply to raw calls
- other errors – the ChannelOutput failed to send the packet; the error codes are determined by the ChannelOutput implementation
-
pw::Status RequestCompletion()#
Notifies the server that the client has requested call completion. On client and bidirectional streaming calls, no further client stream messages will be sent.
-
pw::Status Cancel()#
Cancels this RPC. Closes the call and sends a CANCELLED error to the server. Return statuses are the same as Write().
-
void Abandon()#
Closes this RPC locally. Sends a CLIENT_REQUEST_COMPLETION, but no cancellation packet. Future packets for this RPC are dropped, and the client sends a FAILED_PRECONDITION error in response because the call is not active.
-
void set_on_completed(pw::Function<void(ResponseTypeIfUnaryOnly, pw::Status)>)#
Sets the callback that is called when the RPC completes normally. The signature depends on whether the call has a unary or stream response.
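Putting a few of these together, a client streaming call might be driven as follows. This is a sketch only: the writer would be obtained by invoking the RPC through a generated service client, and the function and parameter names here are hypothetical.

```cpp
// Sketch: driving a raw client streaming RPC through the call object API.
// SendRequestAndFinish and its arguments are hypothetical names.
void SendRequestAndFinish(pw::rpc::RawClientWriter& writer,
                          pw::ConstByteSpan encoded_request) {
  if (!writer.active()) {
    return;  // The call was never started or has already completed.
  }
  writer.Write(encoded_request)
      .IgnoreError();           // Stream a request to the server.
  writer.RequestCompletion();   // No further client stream messages follow.
  // To give up on the RPC instead, writer.Cancel() closes the call and
  // sends a CANCELLED error to the server.
}
```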
Callbacks#
The C++ call objects allow users to set callbacks that are invoked when RPC events occur.
Name | Stream signature | Non-stream signature | Server | Client
---|---|---|---|---
on_error | void(pw::Status) | void(pw::Status) | ✅ | ✅
on_next | void(const PayloadType&) | n/a | ✅ | ✅
on_completed | void(pw::Status) | void(const PayloadType&, pw::Status) | | ✅
on_client_requested_completion | void() | n/a | ✅ (optional) |
Limitations and restrictions#
RPC callbacks are free to perform most actions, including invoking new RPCs or cancelling pending calls. However, the C++ implementation imposes some limitations and restrictions that must be observed.
Destructors & moves wait for callbacks to complete#
Callbacks must not destroy their call object. Attempting to do so will result in deadlock.
Other threads may destroy a call while its callback is running, but that thread will block until all callbacks complete.
Callbacks must not move their call object if the call is still active. They may move their call object after it has terminated. Callbacks may move a different call into their call object, since moving closes the destination call.
Other threads may move a call object while it has a callback running, but they will block until the callback completes if the call is still active.
Warning
Deadlocks or crashes occur if a callback:
attempts to destroy its call object
attempts to move its call object while the call is still active
never returns
If a pw_rpc callback violates these restrictions, a crash may occur, depending
on the value of PW_RPC_CALLBACK_TIMEOUT_TICKS. These crashes have a message
like the following:
A callback for RPC 1:cc0f6de0/31e616ce has not finished after 10000 ticks.
This may indicate that an RPC callback attempted to destroy or move its own
call object, which is not permitted. Fix this condition or change the value of
PW_RPC_CALLBACK_TIMEOUT_TICKS to avoid this crash.
See https://pigweed.dev/pw_rpc#destructors-moves-wait-for-callbacks-to-complete
for details.
Only one thread at a time may execute on_next
#
Only one thread may execute the on_next
callback for a specific service
method at a time. If a second thread calls ProcessPacket()
with a stream
packet before the on_next
callback for the previous packet completes, the
second packet will be dropped. The RPC endpoint logs a warning when this occurs.
Example warning for a dropped stream packet:
WRN Received stream packet for 1:cc0f6de0/31e616ce before the callback for
a previous packet completed! This packet will be dropped. This can be
avoided by handling packets for a particular RPC on only one thread.
RPC calls introspection#
pw_rpc provides the pw_rpc/method_info.h header, which allows obtaining
information about a generated RPC method at compile time.
For now it provides only two types: MethodRequestType<RpcMethod> and
MethodResponseType<RpcMethod>. They are aliases for the types used as the
request and the response, respectively, for the given RpcMethod.
Example#
We have an RPC service SpecialService
with MyMethod
method:
package some.package;
service SpecialService {
rpc MyMethod(MyMethodRequest) returns (MyMethodResponse) {}
}
We also have a templated Storage type alias:
template <auto kMethod>
using Storage =
std::pair<MethodRequestType<kMethod>, MethodResponseType<kMethod>>;
Storage<some::package::pw_rpc::pwpb::SpecialService::MyMethod>
will
instantiate as:
std::pair<some::package::MyMethodRequest::Message,
some::package::MyMethodResponse::Message>;
Note
Only nanopb and pw_protobuf have real types for
MethodRequestType<RpcMethod>/MethodResponseType<RpcMethod>. Raw has both set
to void, although in reality they are pw::ConstByteSpan. Any helper or trait
that wants to use these types for raw methods should provide a custom
implementation that copies the bytes under the span instead of copying just
the span.
Client synchronous call wrappers#
pw_rpc provides wrappers that convert the asynchronous client API to a synchronous API. The SynchronousCall<RpcMethod> functions wrap the asynchronous client RPC call with a timed thread notification and return once a result is known or a timeout has occurred. Only unary methods are supported.
The Nanopb and pwpb APIs return a SynchronousCallResult<Response>
object, which can be queried to determine whether any error scenarios occurred and, if not, access the response. The raw API executes a function when the call completes or returns a pw::Status
if it does not.
SynchronousCall<RpcMethod>
blocks indefinitely, whereas SynchronousCallFor<RpcMethod>
and SynchronousCallUntil<RpcMethod>
block for a given timeout or until a deadline, respectively. All wrappers work with either the standalone static RPC functions or the generated service client member methods.
The following examples use the Nanopb API to make a call that blocks indefinitely. If you’d like to include a timeout for how long the call should block for, use the SynchronousCallFor()
or SynchronousCallUntil()
variants.
pw_rpc_EchoMessage request{.msg = "hello" };
pw::rpc::SynchronousCallResult<pw_rpc_EchoMessage> result =
pw::rpc::SynchronousCall<EchoService::Echo>(rpc_client,
channel_id,
request);
if (result.ok()) {
PW_LOG_INFO("%s", result.response().msg);
}
Additionally, the use of a generated Client
object is supported:
pw_rpc::nanopb::EchoService::Client client(rpc_client, channel_id);
pw_rpc_EchoMessage request{.msg = "hello" };
pw::rpc::SynchronousCallResult<pw_rpc_EchoMessage> result =
pw::rpc::SynchronousCall<EchoService::Echo>(client, request);
if (result.ok()) {
PW_LOG_INFO("%s", result.response().msg);
}
The raw API works similarly to the Nanopb API, but takes a pw::Function
and returns a pw::Status
. If the RPC completes, the pw::Function
is called with the response and returned status, and the SynchronousCall
invocation returns OK
. If the RPC fails, SynchronousCall
returns an error.
pw::Status rpc_status = pw::rpc::SynchronousCall<EchoService::Echo>(
rpc_client, channel_id, encoded_request,
[](pw::ConstByteSpan reply, pw::Status status) {
PW_LOG_INFO("Received %zu bytes with status %s",
reply.size(),
status.str());
});
Note
Use of the SynchronousCall wrappers requires a pw::sync::TimedThreadNotification
backend.
Warning
These wrappers should not be used from any context that cannot be blocked! This method will block the calling thread until the RPC completes, and translate the response into a pw::rpc::SynchronousCallResult
that contains the error type and status or the proto response.
Example#
#include "pw_rpc/synchronous_call.h"
void InvokeUnaryRpc() {
pw::rpc::Client client;
pw::rpc::Channel channel;
RoomInfoRequest request;
SynchronousCallResult<RoomInfoResponse> result =
SynchronousCall<Chat::GetRoomInformation>(client, channel.id(), request);
if (result.is_rpc_error()) {
ShutdownClient(client);
} else if (result.is_server_error()) {
HandleServerError(result.status());
} else if (result.is_timeout()) {
// SynchronousCall will block indefinitely, so we should never get here.
PW_UNREACHABLE();
}
HandleRoomInformation(std::move(result).response());
}
void AnotherExample() {
pw_rpc::nanopb::Chat::Client chat_client(client, channel);
constexpr auto kTimeout = pw::chrono::SystemClock::for_at_least(500ms);
RoomInfoRequest request;
auto result = SynchronousCallFor<Chat::GetRoomInformation>(
chat_client, request, kTimeout);
if (result.is_timeout()) {
RetryRoomRequest();
} else {
...
}
}
The SynchronousCallResult<Response>
is also compatible with the
PW_TRY
family of macros, but users should be aware that their use
will lose information about the type of error. This should only be used if the
caller will handle all error scenarios the same.
pw::Status SyncRpc() {
const RoomInfoRequest request;
PW_TRY_ASSIGN(const RoomInfoResponse& response,
SynchronousCall<Chat::GetRoomInformation>(client, request));
HandleRoomInformation(response);
return pw::OkStatus();
}
ClientServer#
Sometimes a device needs to process RPCs as a server as well as make calls to another device as a client. To do this, both a client and server must be set up, and incoming packets must be sent to both of them.
Pigweed simplifies this setup by providing a ClientServer
class which wraps
an RPC client and server with the same set of channels.
pw::rpc::Channel channels[] = {
pw::rpc::Channel::Create<1>(&channel_output)};
// Creates both a client and a server.
pw::rpc::ClientServer client_server(channels);
void ProcessRpcData(pw::ConstByteSpan packet) {
// Calls into both the client and the server, sending the packet to the
// appropriate one.
client_server.ProcessPacket(packet);
}
Testing#
pw_rpc
provides utilities for unit testing RPC services and client calls.
Client unit testing in C++#
pw_rpc supports invoking RPCs, simulating server responses, and checking what
packets are sent by an RPC client in tests. Raw, Nanopb, and Pwpb interfaces
are supported. Code that uses the raw API may be tested with the raw test
helpers, and vice versa. The Nanopb and Pwpb APIs also provide a test helper
with a real client-server pair that supports testing of asynchronous messaging.
To test synchronous code that invokes RPCs, declare a RawClientTestContext, PwpbClientTestContext, or NanopbClientTestContext. These test context objects provide a preconfigured RPC client, channel, server fake, and buffer for encoding packets.
These test classes are defined in pw_rpc/raw/client_testing.h, pw_rpc/pwpb/client_testing.h, and pw_rpc/nanopb/client_testing.h.
Use the context's client() and channel() to invoke RPCs. Use the context's server() to simulate responses. To verify that the client sent the expected data, use the context's output(), which is a FakeChannelOutput.
For example, the following tests a class that invokes an RPC. It checks that the expected data was sent and then simulates a response from the server.
#include "pw_rpc/raw/client_testing.h"
class ClientUnderTest {
public:
// To support injecting an RPC client for testing, classes that make RPC
// calls should take an RPC client and channel ID or an RPC service client
// (e.g. pw_rpc::raw::MyService::Client).
ClientUnderTest(pw::rpc::Client& client, uint32_t channel_id);
void DoSomethingThatInvokesAnRpc();
bool SetToTrueWhenRpcCompletes();
};
TEST(TestAThing, InvokesRpcAndHandlesResponse) {
RawClientTestContext context;
ClientUnderTest thing(context.client(), context.channel().id());
// Execute the code that invokes the MyService.TheMethod RPC.
thing.DoSomethingThatInvokesAnRpc();
// Find and verify the payloads sent for the MyService.TheMethod RPC.
auto msgs = context.output().payloads<pw_rpc::raw::MyService::TheMethod>();
ASSERT_EQ(msgs.size(), 1u);
VerifyThatTheExpectedMessageWasSent(msgs.back());
// Send the response packet from the server and verify that the class reacts
// accordingly.
EXPECT_FALSE(thing.SetToTrueWhenRpcCompletes());
context.server().SendResponse<pw_rpc::raw::MyService::TheMethod>(
final_message, OkStatus());
EXPECT_TRUE(thing.SetToTrueWhenRpcCompletes());
}
To test client code that uses asynchronous responses, encapsulates multiple RPC calls to one or more services, or uses a custom service implementation, declare a NanopbClientServerTestContextThreaded or PwpbClientServerTestContextThreaded. These test objects are defined in pw_rpc/nanopb/client_server_testing_threaded.h and pw_rpc/pwpb/client_server_testing_threaded.h.
Use the context's server() to register a Service implementation, and its client() and channel() to invoke RPCs. Create a Thread using the context as a ThreadCore to have it asynchronously forward requests/responses, or call ForwardNewPackets to synchronously process all messages. To verify that the client/server sent the expected data, use the context's request(uint32_t index) and response(uint32_t index) to retrieve the ordered messages.
For example, the following tests a class that invokes an RPC and blocks until a response is received. It verifies that the expected data was both sent and received.
#include "my_library_protos/my_service.rpc.pb.h"
#include "pw_rpc/nanopb/client_server_testing_threaded.h"
#include "pw_thread_stl/options.h"
class ClientUnderTest {
public:
// To support injecting an RPC client for testing, classes that make RPC
// calls should take an RPC client and channel ID or an RPC service client
// (e.g. pw_rpc::raw::MyService::Client).
ClientUnderTest(pw::rpc::Client& client, uint32_t channel_id);
Status BlockOnResponse(uint32_t value);
};
class TestService final : public MyService<TestService> {
public:
Status TheMethod(const pw_rpc_test_TheMethod& request,
pw_rpc_test_TheMethod& response) {
response.value = request.integer + 1;
return pw::OkStatus();
}
};
TEST(TestServiceTest, ReceivesUnaryRpcResponse) {
NanopbClientServerTestContextThreaded<> ctx(pw::thread::stl::Options{});
TestService service;
ctx.server().RegisterService(service);
ClientUnderTest client(ctx.client(), ctx.channel().id());
// Execute the code that invokes the MyService.TheMethod RPC.
constexpr uint32_t value = 1;
const auto result = client.BlockOnResponse(value);
const auto request = ctx.request<MyService::TheMethod>(0);
const auto response = ctx.response<MyService::TheMethod>(0);
// Verify content of messages
EXPECT_EQ(result, pw::OkStatus());
EXPECT_EQ(request.integer, value);
EXPECT_EQ(response.value, value + 1);
}
Use the context's response(uint32_t index, Response<kMethod>& response) to decode messages into a provided response object. Use this version if decoder callbacks are needed to fully decode a message, for instance if it uses repeated fields.
TestResponse::Message response{};
response.repeated_field.SetDecoder(
[&values](TestResponse::StreamDecoder& decoder) {
return decoder.ReadRepeatedField(values);
});
ctx.response<test::GeneratedService::TestAnotherUnaryRpc>(0, response);
Synchronous versions of these test contexts, NanopbClientServerTestContext and PwpbClientServerTestContext, also exist and may be used on non-threaded systems. While these do not allow for asynchronous messaging, they support the use of service implementations and use a similar syntax. When these are used, .ForwardNewPackets() should be called after each RPC call to trigger sending of queued messages.
For example, the following tests a class that invokes an RPC that is responded to by a test service implementation.
#include "my_library_protos/my_service.rpc.pb.h"
#include "pw_rpc/nanopb/client_server_testing.h"
class ClientUnderTest {
public:
ClientUnderTest(pw::rpc::Client& client, uint32_t channel_id);
Status SendRpcCall(uint32_t value);
};
class TestService final : public MyService<TestService> {
public:
Status TheMethod(const pw_rpc_test_TheMethod& request,
pw_rpc_test_TheMethod& response) {
response.value = request.integer + 1;
return pw::OkStatus();
}
};
TEST(TestServiceTest, ReceivesUnaryRpcResponse) {
NanopbClientServerTestContext<> ctx;
TestService service;
ctx.server().RegisterService(service);
ClientUnderTest client(ctx.client(), ctx.channel().id());
// Execute the code that invokes the MyService.TheMethod RPC.
constexpr uint32_t value = 1;
const auto result = client.SendRpcCall(value);
// Needed after every RPC call to trigger forwarding of packets
ctx.ForwardNewPackets();
const auto request = ctx.request<MyService::TheMethod>(0);
const auto response = ctx.response<MyService::TheMethod>(0);
// Verify content of messages
EXPECT_EQ(result, pw::OkStatus());
EXPECT_EQ(request.integer, value);
EXPECT_EQ(response.value, value + 1);
}
Custom packet processing for ClientServerTestContext#
Optional constructor arguments for the nanopb/pwpb *ClientServerTestContext and *ClientServerTestContextThreaded allow customized packet processing. By default, the only processing performed is a ProcessPacket() call on the ClientServer instance.
For cases when additional instrumentation or offloading to a separate thread is needed, separate client and server processors can be passed to the context constructors. A packet processor is a function that returns pw::Status and accepts two arguments: pw::rpc::ClientServer& and pw::ConstByteSpan.
Default packet processing is equivalent to the following processor:
[](ClientServer& client_server, pw::ConstByteSpan packet) -> pw::Status {
return client_server.ProcessPacket(packet);
};
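To illustrate the kind of instrumentation a custom processor can add, here is a self-contained sketch of a processor that counts packets before forwarding them. The Status enum, ClientServerStub, and ConstByteSpan alias below are simplified stand-ins for the real Pigweed types, which are not reproduced here.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Stand-ins for pw::Status, pw::ConstByteSpan, and pw::rpc::ClientServer so
// the sketch is self-contained; the real types come from Pigweed.
enum class Status { kOk, kInternal };
using ConstByteSpan = const std::vector<std::byte>&;

struct ClientServerStub {
  int processed = 0;
  Status ProcessPacket(ConstByteSpan) {
    ++processed;
    return Status::kOk;
  }
};

using PacketProcessor = std::function<Status(ClientServerStub&, ConstByteSpan)>;

// An instrumenting processor: counts packets, then forwards to
// ProcessPacket(), as the notes in this section require.
inline PacketProcessor MakeCountingProcessor(int& counter) {
  return [&counter](ClientServerStub& client_server, ConstByteSpan packet) {
    ++counter;                                    // custom instrumentation
    return client_server.ProcessPacket(packet);   // must still forward
  };
}
```

The same shape (instrument, then delegate to ProcessPacket) applies whether the extra work is logging, metrics, or queueing for another thread.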
The server processor will be applied to all packets sent to the server (i.e. requests), and the client processor will be applied to all packets sent to the client (i.e. responses).
Note
The packet processor MUST call ClientServer::ProcessPacket()
method.
Otherwise the packet won’t be processed.
Note
If the packet processor offloads processing to a separate thread, it MUST copy the packet. After the packet processor returns, the underlying array can go out of scope or be reused for other purposes.
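The copy requirement exists because the span's backing storage is only valid for the duration of the call. A minimal sketch of the safe pattern, using only the standard library (the PacketQueue type is hypothetical; a real system would use a thread-safe queue drained by a worker thread):

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Minimal work queue standing in for a cross-thread handoff. In a real
// system this would be a thread-safe queue drained by a worker thread.
struct PacketQueue {
  std::deque<std::vector<std::byte>> pending;

  // Copy the packet's bytes into owned storage BEFORE queueing; the
  // caller's buffer may be reused as soon as this function returns.
  void Enqueue(const std::byte* data, std::size_t size) {
    pending.emplace_back(data, data + size);
  }
};
```

Because Enqueue takes ownership of a copy, the caller is free to reuse or destroy its buffer immediately after the processor returns.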
SendResponseIfCalled() helper#
The SendResponseIfCalled() function waits for the *ClientTestContext* output to have a call for the specified method and then responds to it. The wait supports a timeout (the default is 100 ms).
#include "pw_rpc/test_helpers.h"
pw::rpc::PwpbClientTestContext client_context;
other::pw_rpc::pwpb::OtherService::Client other_service_client(
client_context.client(), client_context.channel().id());
PW_PWPB_TEST_METHOD_CONTEXT(MyService, GetData)
context(other_service_client);
context.call({});
ASSERT_OK(pw::rpc::test::SendResponseIfCalled<
other::pw_rpc::pwpb::OtherService::GetPart>(
client_context, {.value = 42}));
// At this point MyService::GetData handler received the GetPartResponse.
Integration testing with pw_rpc#
pw_rpc provides utilities to simplify writing integration tests for systems that communicate with pw_rpc. The integration test utilities set up a socket to use for IPC between an RPC server and client process.
The server binary uses the system RPC server facade defined in pw_rpc_system_server/rpc_server.h. The client binary uses the functions defined in pw_rpc/integration_testing.h:
-
constexpr uint32_t kChannelId#
The RPC channel for integration test RPCs.
Module Configuration Options#
The following configurations can be adjusted via compile-time configuration of this module, see the module documentation for more details.
Defines
-
PW_RPC_COMPLETION_REQUEST_CALLBACK#
pw_rpc clients may request call completion by sending a CLIENT_REQUEST_COMPLETION packet. For client streaming or bi-directional RPCs, this also indicates that the client is done sending requests. While this can be useful in some circumstances, it is often not necessary.
This option controls whether or not to include a callback that is called when the client requests completion. The callback is included in all ServerReader/Writer objects as a pw::Function, so it may have a significant cost.
This is disabled by default.
-
PW_RPC_NANOPB_STRUCT_MIN_BUFFER_SIZE#
The Nanopb-based pw_rpc implementation allocates memory to use for Nanopb structs for the request and response protobufs. The template function that allocates these structs rounds struct sizes up to this value so that different structs can be allocated with the same function. Structs with sizes larger than this value cause an extra function to be created, which slightly increases code size.
Ideally, this value will be set to the size of the largest Nanopb struct used as an RPC request or response. The buffer can be stack- or globally-allocated (see PW_RPC_NANOPB_STRUCT_BUFFER_STACK_ALLOCATE).
This defaults to 64 bytes.
-
PW_RPC_USE_GLOBAL_MUTEX#
Enable global synchronization for RPC calls. If this is set, a backend must be configured for pw_sync:mutex.
This is enabled by default.
-
PW_RPC_YIELD_MODE#
pw_rpc must yield the current thread when waiting for a callback to complete in a different thread. PW_RPC_YIELD_MODE determines how to yield. There are three supported settings:
PW_RPC_YIELD_MODE_BUSY_LOOP - Do nothing. Release and reacquire the RPC lock in a busy loop. PW_RPC_USE_GLOBAL_MUTEX must be 0.
PW_RPC_YIELD_MODE_SLEEP - Yield with 1-tick calls to pw::this_thread::sleep_for(). A backend must be configured for pw_thread:sleep.
PW_RPC_YIELD_MODE_YIELD - Yield with pw::this_thread::yield(). A backend must be configured for pw_thread:yield. IMPORTANT: On some platforms, pw::this_thread::yield() does not yield to lower priority tasks and should not be used here.
-
PW_RPC_YIELD_MODE_BUSY_LOOP#
-
PW_RPC_YIELD_MODE_SLEEP#
-
PW_RPC_YIELD_MODE_YIELD#
Supported configuration values for
PW_RPC_YIELD_MODE
.
-
PW_RPC_YIELD_SLEEP_DURATION#
If
PW_RPC_YIELD_MODE == PW_RPC_YIELD_MODE_SLEEP
,PW_RPC_YIELD_SLEEP_DURATION
sets how long to sleep during each iteration of the yield loop. The value must be a constant expression that converts to apw::chrono::SystemClock::duration
.
-
PW_RPC_CALLBACK_TIMEOUT_TICKS#
pw_rpc call objects wait for their callbacks to complete before they are moved or destroyed. Deadlocks occur if a callback:
attempts to destroy its call object,
attempts to move its call object while the call is still active, or
never returns.
If PW_RPC_CALLBACK_TIMEOUT_TICKS is greater than 0, then PW_CRASH is invoked if a thread waits for an RPC callback to complete for more than the specified tick count.
A "tick" in this context is one iteration of a loop that releases the RPC lock and yields the thread according to PW_RPC_YIELD_MODE. By default, the thread yields with a 1-tick call to pw::this_thread::sleep_for().
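The release-yield-reacquire loop described above can be sketched in plain C++; this is a conceptual illustration, not the real pw_rpc implementation, and the function names here are hypothetical. The yield callback stands in for whatever PW_RPC_YIELD_MODE selects (busy loop, sleep, or thread yield).

```cpp
#include <cassert>
#include <mutex>

// Conceptual sketch of the callback-wait loop: release the lock so other
// threads can make progress, yield per the configured mode, reacquire,
// and recheck. Returns the number of "ticks" (loop iterations) taken.
template <typename Predicate, typename YieldFn>
int WaitUntil(std::mutex& rpc_lock, Predicate done, YieldFn yield) {
  int ticks = 0;
  rpc_lock.lock();
  while (!done()) {
    rpc_lock.unlock();  // release so the callback's thread can finish
    yield();            // busy loop, sleep_for(), or yield() per config
    ++ticks;            // one "tick" per iteration; PW_RPC_CALLBACK_TIMEOUT_TICKS
                        // would crash here once the count exceeds the limit
    rpc_lock.lock();
  }
  rpc_lock.unlock();
  return ticks;
}
```

In this sketch a tick is exactly one unlock/yield/lock round trip, matching the definition of a tick given above.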
-
PW_RPC_DYNAMIC_ALLOCATION#
Whether pw_rpc should use dynamic memory allocation internally. If enabled, pw_rpc dynamically allocates channels and its encoding buffer. RPC users may use dynamic allocation independently of this option (e.g. to allocate pw_rpc call objects).
The semantics for allocating and initializing channels change depending on this option. If dynamic allocation is disabled, pw_rpc endpoints (servers or clients) use an externally-allocated, fixed-size array of channels. That array must include unassigned channels or existing channels must be closed to add new channels.
If dynamic allocation is enabled, a span of channels may be passed to the endpoint at construction, but these channels are only used to initialize its internal channels container. External channel objects are NOT used by the endpoint and cannot be updated if dynamic allocation is enabled. No unassigned channels should be passed to the endpoint; they will be ignored. Any number of channels may be added to the endpoint, without closing existing channels, but adding channels will use more memory.
-
PW_RPC_DYNAMIC_CONTAINER(type)#
If PW_RPC_DYNAMIC_ALLOCATION is enabled, this macro must expand to a container capable of storing objects of the provided type. This container will be used internally by pw_rpc to allocate the channels list and encoding buffer. Defaults to std::vector<type>, but may be set to any type that supports the following std::vector operations:
Default construction
emplace_back()
pop_back()
back()
resize()
clear()
Range-based for loop iteration (begin(), end())
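To make the required interface concrete, here is a sketch of a hypothetical custom container that exposes exactly the listed operations, backed by std::deque purely as an example (std::vector is the default and usually fine):

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <utility>

// Sketch of a container satisfying the PW_RPC_DYNAMIC_CONTAINER interface:
// default construction, emplace_back(), pop_back(), back(), resize(),
// clear(), and range-based for iteration.
template <typename T>
class DequeContainer {
 public:
  DequeContainer() = default;

  template <typename... Args>
  T& emplace_back(Args&&... args) {
    return data_.emplace_back(std::forward<Args>(args)...);
  }
  void pop_back() { data_.pop_back(); }
  T& back() { return data_.back(); }
  void resize(std::size_t n) { data_.resize(n); }
  void clear() { data_.clear(); }

  auto begin() { return data_.begin(); }
  auto end() { return data_.end(); }

  std::size_t size() const { return data_.size(); }

 private:
  std::deque<T> data_;
};
```

If this wrapper lived in a header such as my_project/deque_container.h (a hypothetical path), it might be selected by defining PW_RPC_DYNAMIC_CONTAINER(type) to expand to DequeContainer<type> and setting PW_RPC_DYNAMIC_CONTAINER_INCLUDE to that header.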
-
PW_RPC_DYNAMIC_CONTAINER_INCLUDE#
If PW_RPC_DYNAMIC_ALLOCATION is enabled, this header file is included in files that use PW_RPC_DYNAMIC_CONTAINER. Defaults to <vector>, but may be set in conjunction with PW_RPC_DYNAMIC_CONTAINER to use a different container type for dynamic allocations in pw_rpc.
-
PW_RPC_ENCODING_BUFFER_SIZE_BYTES#
Size of the global RPC packet encoding buffer in bytes. If dynamic allocation is enabled, this value is only used for test helpers that allocate RPC encoding buffers.
-
PW_RPC_CONFIG_LOG_LEVEL#
The log level to use for this module. Logs below this level are omitted.
-
PW_RPC_CONFIG_LOG_MODULE_NAME#
The log module name to use for this module.
-
PW_RPC_NANOPB_STRUCT_BUFFER_STACK_ALLOCATE#
This option determines whether to allocate the Nanopb structs on the stack or in a global variable. Globally allocated structs are NOT thread safe, but work fine when the RPC server’s ProcessPacket function is only called from one thread.
-
_PW_RPC_NANOPB_STRUCT_STORAGE_CLASS#
Internal macro for declaring the Nanopb struct; do not use.
Zephyr#
To enable pw_rpc.* for Zephyr, add CONFIG_PIGWEED_RPC=y to the project's configuration. This will enable the Kconfig menu for the following:
pw_rpc.server, which can be enabled via CONFIG_PIGWEED_RPC_SERVER=y.
pw_rpc.client, which can be enabled via CONFIG_PIGWEED_RPC_CLIENT=y.
pw_rpc.client_server, which can be enabled via CONFIG_PIGWEED_RPC_CLIENT_SERVER=y.
pw_rpc.common, which can be enabled via CONFIG_PIGWEED_RPC_COMMON=y.
Encoding and sending packets#
pw_rpc
has to manage interactions among multiple RPC clients, servers,
client calls, and server calls. To safely synchronize these interactions with
minimal overhead, pw_rpc
uses a single, global mutex (when
PW_RPC_USE_GLOBAL_MUTEX
is enabled).
Because pw_rpc
uses a global mutex, it also uses a global buffer to encode
outgoing packets. The size of the buffer is set with
PW_RPC_ENCODING_BUFFER_SIZE_BYTES
, which defaults to 512 B. If dynamic
allocation is enabled, this size does not affect how large RPC messages can be,
but it is still used for sizing buffers in test utilities.
Users of pw_rpc
must implement the pw::rpc::ChannelOutput
interface.
-
class pw::rpc::ChannelOutput#
pw_rpc
endpoints useChannelOutput
instances to send packets. Systems that integrate pw_rpc must use one or moreChannelOutput
instances.-
static constexpr size_t kUnlimited = std::numeric_limits<size_t>::max()#
Value returned from
MaximumTransmissionUnit()
to indicate an unlimited MTU.
-
virtual size_t MaximumTransmissionUnit()#
Returns the size of the largest packet the
ChannelOutput
can send.ChannelOutput
implementations should only override this function if they impose a limit on the MTU. The default implementation returnskUnlimited
, which indicates that there is no MTU limit.
-
virtual pw::Status Send(span<std::byte> packet)#
Sends an encoded RPC packet. Returns OK if further packets may be sent, even if the current packet could not be sent. Returns any other status if the Channel is no longer able to send packets.
The RPC system’s internal lock is held while this function is called. Avoid long-running operations, since these will delay any other users of the RPC system.
Danger
No
pw_rpc
APIs may be accessed in this function! Implementations MUST NOT access any RPC endpoints (pw::rpc::Client
,pw::rpc::Server
) or call objects (pw::rpc::ServerReaderWriter
,pw::rpc::ClientReaderWriter
, etc.) inside theSend()
function or any descendent calls. Doing so will result in deadlock! RPC APIs may be used by other threads, just not withinSend()
The buffer provided in packet must NOT be accessed outside of this function. It must be sent immediately or copied elsewhere before the function returns.
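The Send() contract above (copy or transmit the buffer before returning; report OK whenever future packets may still be sent) can be illustrated with a plain C++ sketch. BufferedOutput is a hypothetical stand-in, not the real pw::rpc::ChannelOutput; it enforces an MTU and copies each accepted packet into owned storage before returning.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Conceptual sketch of a ChannelOutput-like class: enforces an MTU and
// copies each packet before returning, since the caller's buffer must not
// be retained after Send() returns.
class BufferedOutput {
 public:
  explicit BufferedOutput(std::size_t mtu) : mtu_(mtu) {}

  std::size_t MaximumTransmissionUnit() const { return mtu_; }

  // Returns true (standing in for OK) as long as future packets may be
  // sent, even when this particular packet is dropped for exceeding the
  // MTU, mirroring the Send() contract described above.
  bool Send(const std::byte* packet, std::size_t size) {
    if (size > mtu_) {
      return true;  // dropped, but the channel can still send later packets
    }
    sent_.emplace_back(packet, packet + size);  // copy before returning
    return true;
  }

  const std::vector<std::vector<std::byte>>& sent() const { return sent_; }

 private:
  std::size_t mtu_;
  std::vector<std::vector<std::byte>> sent_;
};
```

A real implementation would transmit over its transport inside Send() instead of buffering, and must keep the function short because the RPC lock is held for its duration.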