pw_log_rpc#
An RPC-based logging solution for Pigweed with log filtering and log drops reporting – coming soon!
Warning
This module is under construction and might change in the future.
RPC Logging#
How to Use#
1. Set up RPC#
Set up RPC for your target device. Basic deployments run RPC over a UART, with HDLC on top for framing. See pw_rpc for details on how to enable pw_rpc.
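For reference, the pw_rpc system server facade used in the full example at the bottom of this page brings up such a deployment; a minimal sketch:
#include "pw_rpc_system_server/rpc_server.h"

int main() {
  // Bring up the transport (e.g. UART + HDLC) behind the system server facade.
  pw::rpc::system_server::Init();
  // Register RPC services here (the log and filter services come in the
  // later steps), then start serving requests.
  pw::rpc::system_server::Start();
  return 0;
}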
2. Set up tokenized logging (optional)#
Set up the pw_log_tokenized log backend.
3. Connect the tokenized logging handler to the MultiSink#
Create a MultiSink instance to buffer log entries. Then, make the log backend handler, pw_log_tokenized_HandleLog(), encode log entries in the log::LogEntry format and add them to the MultiSink.
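A condensed sketch of this step, with buffer sizes chosen arbitrarily; the full version, including locking and timestamps, appears in foo/log.cc at the bottom of this page:
#include <array>
#include <cstdint>

#include "pw_log/proto_utils.h"
#include "pw_multisink/multisink.h"

std::array<std::byte, 5000> multisink_buffer;
pw::multisink::MultiSink multisink(multisink_buffer);

extern "C" void pw_log_tokenized_HandleLog(
    uint32_t metadata, const uint8_t message[], size_t size_bytes) {
  std::byte encode_buffer[64];  // Must fit the largest encoded entry.
  // Encode the tokenized log as a log::LogEntry proto and buffer it.
  pw::Result<pw::ConstByteSpan> result = pw::log::EncodeTokenizedLog(
      metadata, message, size_bytes, /*timestamp=*/0, encode_buffer);
  if (result.ok()) {
    multisink.HandleEntry(result.value());
  } else {
    multisink.HandleDropped();  // Track the entry as dropped.
  }
}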
4. Create log drains and filters#
Create an RpcLogDrainMap with one RpcLogDrain for each RPC channel used to stream logs. Optionally, create a FilterMap with Filter objects with different IDs. Provide these maps to the LogService and register the latter with the application’s RPC server. The RpcLogDrainMap provides a convenient way to access and maintain each RpcLogDrain. Attach each RpcLogDrain to the MultiSink. Optionally, set the RpcLogDrain callback to decide if a log should be kept or dropped; this callback can be Filter::ShouldDropLog.
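A condensed sketch, borrowing the drain constructor arguments and names from the foo/log.cc example at the bottom of this page:
#include <array>

#include "pw_log_rpc/log_service.h"
#include "pw_log_rpc/rpc_log_drain.h"
#include "pw_log_rpc/rpc_log_drain_map.h"
#include "pw_sync/lock_annotations.h"
#include "pw_sync/mutex.h"

// One drain per RPC channel used to stream logs; channel ID 1 here.
pw::sync::Mutex drain_mutex;
std::array<std::byte, 128> drain_buffer PW_GUARDED_BY(drain_mutex);
std::array<pw::log_rpc::RpcLogDrain, 1> drains = {
    pw::log_rpc::RpcLogDrain(
        1,
        drain_buffer,
        drain_mutex,
        pw::log_rpc::RpcLogDrain::LogDrainErrorHandling::kIgnoreWriterErrors),
};
pw::log_rpc::RpcLogDrainMap drain_map(drains);
pw::log_rpc::LogService log_service(drain_map);
// Register log_service with the RPC server; attach each drain to the
// MultiSink (RpcLogDrainThread does the attaching in the full example).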
5. Flush the log drains in the background#
Depending on the product’s requirements, create a thread to flush all RpcLogDrains or one thread per drain. The thread(s) must continuously call RpcLogDrain::Flush() to pull entries from the MultiSink and send them to the log listeners. Alternatively, use RpcLogDrain::Trickle to control the rate of log entries streamed. Optionally, set up a callback to notify the thread(s) when a drain is open.
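In the full example at the bottom of this page, the flushing work is delegated to an RpcLogDrainThread running on a detached thread:
#include "foo/log.h"  // Declares foo::log::log_thread in the example below.
#include "pw_thread/detached_thread.h"
#include "pw_thread_stl/options.h"

void StartLogThread() {
  // The RpcLogDrainThread flushes every registered drain in a loop.
  pw::thread::DetachedThread(pw::thread::stl::Options(), foo::log::log_thread);
}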
Logging over RPC diagrams#
Sample RPC logs request#
The log listener, e.g. a computer, requests logs via RPC. The log service receives the request and sets up the corresponding RpcLogDrain to start the log stream.
Sample logging over RPC#
Logs are streamed via RPC to a computer, and to another log listener. There can also be internal log readers, i.e. MultiSink::Drains, attached to the MultiSink, such as a writer to persistent memory.
Relation to pw_log and pw_log_tokenized#
pw_log_rpc is often used in combination with pw_log and pw_log_tokenized. The diagram below shows the order of execution after invoking a pw_log macro.
Components Overview#
LogEntry and LogEntries#
RPC logging uses LogEntry to encapsulate each entry’s data, such as level, timestamp, and message. LogEntries can hold multiple instances of LogEntry to send more data using fewer transmissions. LogEntries has an optional field for the first message’s sequence ID, which corresponds to the count of each LogEntry that passes the log filter and is sent. A client can use this sequence ID and the number of messages in a LogEntries to figure out if logs were dropped during transmission.
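For illustration only (these helper names and types are assumptions, not part of the module's API), a client could detect transmission gaps like this:
#include <cstdint>

// The previous LogEntries packet started at sequence ID prev_first_id and
// carried prev_count entries, so the next packet should start at
// prev_first_id + prev_count. Any excess is the number of logs lost in
// transit, e.g. ID 100 with 5 entries followed by a packet starting at 110
// means 5 logs were dropped.
uint32_t CountDroppedLogs(uint32_t prev_first_id,
                          uint32_t prev_count,
                          uint32_t next_first_id) {
  return next_first_id - (prev_first_id + prev_count);
}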
RPC log service#
The LogService class is an RPC service that provides a way to request a log stream sent via RPC and to configure log filters. Thus, it helps avoid using a different protocol for logs and RPCs over the same interface(s). It requires an RpcLogDrainMap to assign stream writers and delegate the log stream flushing to the user’s preferred method, as well as a FilterMap to retrieve and modify filters. The client may also stop streaming the logs by calling Cancel() or RequestCompletion() using the RawClientReader interface. Note that Cancel() may lead to dropped logs. To prevent dropped logs, use RequestCompletion() and enable PW_RPC_REQUEST_COMPLETION_CALLBACK, e.g. -DPW_RPC_REQUEST_COMPLETION_CALLBACK=1. If PW_RPC_REQUEST_COMPLETION_CALLBACK is not enabled, a RequestCompletion() call will not stop the logging stream.
RpcLogDrain#
An RpcLogDrain reads from the MultiSink instance that buffers logs, then packs and sends the retrieved log entries to the log listener. One RpcLogDrain is needed for each log listener. An RpcLogDrain needs a thread to continuously call Flush() to maintain the log stream. A thread can maintain multiple log streams, but it must not be the same thread used by the RPC server, to avoid blocking it.
Each RpcLogDrain is identified by a known RPC channel ID and requires an rpc::RawServerWriter to write the packed log entries. This writer is assigned by the LogService::Listen RPC.
RpcLogDrains can also be provided an open RPC writer, to constantly stream logs without the need to request them. This is useful in cases where the connection to the client is dropped silently: the log stream can continue when reconnected, without the client requesting logs again, if the error handling is set to kIgnoreWriterErrors; otherwise, the writer will be closed.
An RpcLogDrain must be attached to a MultiSink containing multiple log::LogEntry entries. When Flush is called, the drain acquires the rpc::RawServerWriter’s write buffer, grabs one log::LogEntry from the multisink, encodes it into a log::LogEntries stream, and repeats the process until the write buffer is full. Then the drain calls rpc::RawServerWriter::Write to flush the write buffer and repeats the process until all the entries in the MultiSink are read or an error is found.
The user must provide a buffer large enough for the largest entry in the MultiSink while also accounting for the interface’s Maximum Transmission Unit (MTU). If the RpcLogDrain finds a drop message count as it reads the MultiSink, it will insert a message in the stream with the drop count in the log proto’s optional dropped field. The receiving end can display the count with the logs if desired.
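The foo/log.cc example at the bottom of this page derives this buffer size from the module's kMinEntrySizeWithoutPayload constant plus the largest expected message:
#include <cstddef>

#include "pw_log_rpc/rpc_log_drain.h"

// Tokenized logs are typically 12-24 bytes.
constexpr size_t kMaxMessageSize = 32;
// Keep this below the MTU of the RPC channel output used by the writer.
constexpr size_t kMaxLogEntrySize =
    pw::log_rpc::RpcLogDrain::kMinEntrySizeWithoutPayload + kMaxMessageSize;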
RpcLogDrainMap#
Provides a convenient way to access all or a single RpcLogDrain by its RPC channel ID.
RpcLogDrainThread#
The module includes a sample thread that flushes each drain sequentially. RpcLogDrainThread takes an encoding buffer span at construction. RpcLogDrainThreadWithBuffer takes a template parameter for the buffer size, which must be large enough to fit at least one log entry. Future work might replace this with enqueueing the flush work on a work queue. The user can also choose to have different threads flushing individual RpcLogDrains with different priorities.
When creating an RpcLogDrainThread, the thread can be configured to rate limit logs by limiting how many logs can be flushed from each sink before a configurable sleep period begins, giving the sinks time to handle the flushed logs. For example, if rate limiting is configured to 2 log bundles per flush with a minimum delay of 100 ms between flushes, the logging thread will send at most 20 log bundles per second over each sink. Log bundle size is dictated by the size of the encode buffer provided to the RpcLogDrainThread.
Rate limiting is helpful in cases where transient bursts of high volumes of logs cause transport buffers to saturate. By rate limiting the RPC log drain, the transport buffers are given time to send data. As long as the average logging rate is significantly less than the rate limit imposed by the RpcLogDrainThread, the logging pipeline should be more resilient to high-volume log bursts.
Rate limiting log drains is particularly helpful for systems that collect logs to a multisink in bulk when communications aren’t available (e.g. collecting early boot logs until the logging thread starts). If a very full log buffer is suddenly flushed to the sinks without rate limiting, it’s possible to overwhelm the output buffers if they don’t have sufficient headroom.
Note
Introducing a logging drain rate limit will increase logging latency, but usually not by much. It’s important to tune the rate limit configuration to ensure it doesn’t unnecessarily introduce a logging bottleneck or significantly increase latency.
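A sketch of constructing a drain thread with this rate limit, borrowing GetMultiSink() and drain_map from the foo/log.cc example below; the two trailing constructor arguments are assumptions here, so check rpc_log_drain_thread.h for the exact signature:
#include <chrono>

#include "pw_log_rpc/rpc_log_drain_thread.h"

// At most 2 log bundles per flush, with at least 100 ms between flushes,
// i.e. at most 20 bundles per second per sink. The template parameter sizes
// the encoding buffer and must fit at least one log entry.
pw::log_rpc::RpcLogDrainThreadWithBuffer<kMaxLogEntrySize> log_thread(
    GetMultiSink(),
    drain_map,
    /*max_bundles_per_trickle=*/2,
    /*min_delay=*/std::chrono::milliseconds(100));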
Calling OpenUnrequestedLogStream() is a convenient way to set up a log stream that starts without the need to receive an RPC request for logs. The RpcLogDrainThread sets up a callback for each drain, to be notified when a drain is opened and flushing must resume.
Log Drops#
Unfortunately, logs can be dropped and not reach the destination. This module expects to cover all cases and be able to notify the user of log drops when possible. Logs can be dropped when:
- They don’t pass a filter. This is the expected behavior, so filtered logs will not be tracked as dropped logs.
- The drains are too slow to keep up. In this case, the ring buffer is full of undrained entries; when new logs come in, old entries are dropped. The log stream will contain a LogEntry message with the number of dropped logs, e.g. “Dropped 15 logs due to slow reader”.
- There is an error creating or adding a new log entry, and the ring buffer is notified that the log had to be dropped. The log stream will contain a LogEntry message with the number of dropped logs, e.g. “Dropped 15 logs due to slow reader”.
- A log entry is too large for the stack buffer. The log stream will contain an error message with the drop count, e.g. “Dropped 1 log due to stack buffer too small”. Provide a log buffer that fits the largest entry added to the MultiSink to avoid this error.
- A log entry is too large for the outbound buffer. The log stream will contain an error message with the drop count, e.g. “Dropped 1 log due to outbound buffer too small”. Provide a log buffer that fits the largest entry added to the MultiSink to avoid this error.
- There are detected errors transmitting log entries. The log stream will contain a LogEntry with an error message and the number of dropped logs the next time the stream is flushed, but only if the drain’s error handling is set to close the stream on error, e.g. “Dropped 10 logs due to writer error”.
- There are undetected errors transmitting or receiving log entries, such as an interface interruption. Clients can calculate the number of logs lost in transit using the sequence ID and number of entries in each stream packet, e.g. “Dropped 50 logs due to transmission error”.
The drop count is combined when possible and reported only when an entry that passes any filters is going to be sent.
Log Filtering#
A Filter can be applied anywhere in the path of a LogEntry proto, for example, in the PW_LOG* macro implementation, or in an RpcLogDrain if using RPC logging. The log filtering service provides read and modify access to the Filters registered in the FilterMap.
How to Use#
1. Set up RPC#
Set up RPC for your target device. See pw_rpc for details.
2. Create Filters#
Provide each Filter with its own container for the FilterRules, as big as the number of rules desired. These rules can be pre-populated.
3. Create a FilterMap and FilterService#
Set up the FilterMap with the filters that can be modified with the FilterService. Register the service with the RPC server.
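Condensed from the foo/log.cc example at the bottom of this page:
#include <array>

#include "pw_log_rpc/log_filter.h"
#include "pw_log_rpc/log_filter_map.h"
#include "pw_log_rpc/log_filter_service.h"

// Each Filter owns its rule storage, sized for the number of rules desired.
std::array<pw::log_rpc::Filter::Rule, 2> host_filter_rules;
std::array<pw::log_rpc::Filter, 1> filters{
    pw::log_rpc::Filter(pw::as_bytes(pw::span("HOST", 4)), host_filter_rules),
};
pw::log_rpc::FilterMap filter_map(filters);
pw::log_rpc::FilterService filter_service(filter_map);
// Register filter_service with the RPC server, as in main.cc below.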
4. Use RPCs to retrieve and modify filter rules#
Components Overview#
Filter::Rule#
Contains a set of values that are compared against a log when set. All conditions must be met for the rule to be met.
- action: drops or keeps the log if the other conditions match. The rule is ignored when inactive.
- any_flags_set: the condition is met if this value is 0 or the log has any of these flags set.
- level_greater_than_or_equal: the condition is met when the log level is greater than or equal to this value.
- module_equals: the condition is met if this byte array is empty, or the log module equals the contents of this byte array.
- thread_equals: the condition is met if this byte array is empty or the log thread equals the contents of this byte array.
Filter#
Encapsulates a collection of zero or more Filter::Rules and has an ID used to modify or retrieve its contents.
FilterMap#
Provides a convenient way to retrieve registered filters by ID.
Logging with filters example#
The following code shows a sample setup that defers the log handling to the RpcLogDrainThread to avoid having the log streaming block at the log callsite.
main.cc#
#include "foo/log.h"
#include "pw_log/log.h"
#include "pw_thread/detached_thread.h"
#include "pw_thread_stl/options.h"
namespace {
void RegisterServices() {
pw::rpc::system_server::Server().RegisterService(foo::log::log_service);
pw::rpc::system_server::Server().RegisterService(foo::log::filter_service);
}
} // namespace
int main() {
PW_LOG_INFO("Deferred logging over RPC example");
pw::rpc::system_server::Init();
RegisterServices();
pw::thread::DetachedThread(pw::thread::stl::Options(), foo::log::log_thread);
pw::rpc::system_server::Start();
return 0;
}
foo/log.cc#
Example of a log backend implementation, where logs enter the MultiSink and log drains and filters are set up.
#include "foo/log.h"
#include <array>
#include <cstdint>
#include "pw_chrono/system_clock.h"
#include "pw_log/proto_utils.h"
#include "pw_log_rpc/log_filter.h"
#include "pw_log_rpc/log_filter_map.h"
#include "pw_log_rpc/log_filter_service.h"
#include "pw_log_rpc/log_service.h"
#include "pw_log_rpc/rpc_log_drain.h"
#include "pw_log_rpc/rpc_log_drain_map.h"
#include "pw_log_rpc/rpc_log_drain_thread.h"
#include "pw_rpc_system_server/rpc_server.h"
#include "pw_sync/interrupt_spin_lock.h"
#include "pw_sync/lock_annotations.h"
#include "pw_sync/mutex.h"
namespace foo::log {
namespace {
constexpr size_t kLogBufferSize = 5000;
// Tokenized logs are typically 12-24 bytes.
constexpr size_t kMaxMessageSize = 32;
// kMaxLogEntrySize should be less than the MTU of the RPC channel output used
// by the provided server writer.
constexpr size_t kMaxLogEntrySize =
pw::log_rpc::RpcLogDrain::kMinEntrySizeWithoutPayload + kMaxMessageSize;
std::array<std::byte, kLogBufferSize> multisink_buffer;
// To save RAM, share the mutex, since drains will be managed sequentially.
pw::sync::Mutex shared_mutex;
std::array<std::byte, kMaxEntrySize> client1_buffer
PW_GUARDED_BY(shared_mutex);
std::array<std::byte, kMaxEntrySize> client2_buffer
PW_GUARDED_BY(shared_mutex);
std::array<pw::log_rpc::RpcLogDrain, 2> drains = {
pw::log_rpc::RpcLogDrain(
1,
client1_buffer,
shared_mutex,
RpcLogDrain::LogDrainErrorHandling::kIgnoreWriterErrors),
pw::log_rpc::RpcLogDrain(
2,
client2_buffer,
shared_mutex,
RpcLogDrain::LogDrainErrorHandling::kIgnoreWriterErrors),
};
pw::sync::InterruptSpinLock log_encode_lock;
std::array<std::byte, kMaxLogEntrySize> log_encode_buffer
PW_GUARDED_BY(log_encode_lock);
std::array<Filter::Rule, 2> logs_to_host_filter_rules;
std::array<Filter::Rule, 2> logs_to_server_filter_rules{{
{
.action = Filter::Rule::Action::kKeep,
.level_greater_than_or_equal = pw::log::FilterRule::Level::INFO_LEVEL,
},
{
.action = Filter::Rule::Action::kDrop,
},
}};
std::array<Filter, 2> filters{
Filter(pw::as_bytes(pw::span("HOST", 4)), logs_to_host_filter_rules),
Filter(pw::as_bytes(pw::span("WEB", 3)), logs_to_server_filter_rules),
};
pw::log_rpc::FilterMap filter_map(filters);
extern "C" void pw_log_tokenized_HandleLog(
uint32_t metadata, const uint8_t message[], size_t size_bytes) {
int64_t timestamp =
pw::chrono::SystemClock::now().time_since_epoch().count();
std::lock_guard lock(log_encode_lock);
pw::Result<pw::ConstByteSpan> encoded_log_result =
pw::log::EncodeTokenizedLog(
metadata, message, size_bytes, timestamp, log_encode_buffer);
if (!encoded_log_result.ok()) {
GetMultiSink().HandleDropped();
return;
}
GetMultiSink().HandleEntry(encoded_log_result.value());
}
} // namespace
pw::log_rpc::RpcLogDrainMap drain_map(drains);
pw::log_rpc::RpcLogDrainThread log_thread(GetMultiSink(), drain_map);
pw::log_rpc::LogService log_service(drain_map);
pw::log_rpc::FilterService filter_service(filter_map);
pw::multisink::MultiSink& GetMultiSink() {
static pw::multisink::MultiSink multisink(multisink_buffer);
return multisink;
}
} // namespace foo::log
Logging in other source files#
To defer logging, other source files must simply include pw_log/log.h and use the pw_log APIs, as long as the source set that includes foo/log.cc is set up as the log backend.
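For example, a hypothetical source file elsewhere in the application:
// foo/app.cc (hypothetical). Logs land in the MultiSink set up by foo/log.cc.
#include "pw_log/log.h"

void DoWork() {
  // Handled by pw_log_tokenized_HandleLog(), which buffers the entry for the
  // drain thread to stream later; nothing blocks at the callsite.
  PW_LOG_INFO("Work started");
}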
pw_log_rpc in Python#
pw_log_rpc provides client utilities for dealing with RPC logging.
The LogStreamHandler offers APIs to start a log stream (listen_to_logs), handle RPC stream errors (handle_log_stream_error), and handle RPC stream completed events (handle_log_stream_completed). It delegates log parsing to a provided LogStreamDecoder.
Python API#
pw_log_rpc.rpc_log_stream#
Utils to decode logs.
- class pw_log_rpc.rpc_log_stream.LogStreamHandler(rpcs: Services, decoder: LogStreamDecoder)#
Bases: object
Handles an RPC Log Stream.
- Parameters:
rpcs – RPC services to request RPC Log Streams.
decoder – LogStreamDecoder
- __init__(rpcs: Services, decoder: LogStreamDecoder) -> None#
- handle_log_stream_completed(status: Status) -> None#
Resets the log stream RPC on completion to avoid losing logs. Override this function to change the default behavior.
- handle_log_stream_error(error: Status) -> None#
Resets the log stream RPC on error to avoid losing logs. Override this function to change the default behavior.
- listen_to_logs() -> None#
Requests logs streamed over RPC. The RPCs remain open until the server cancels or closes them, either with a response or error packet.
- property source_name: str#