How-To Guides¶
How to install Tamarco¶
Tamarco is compatible with Python >= 3.6. The recommended version is Python 3.7.
To install Tamarco, simply run this command in your terminal of choice:
$ pip3 install tamarco
How to set up logging¶
The profile¶
Two different profiles are allowed:
DEVELOP. The logging level is set to debug.
PRODUCTION. The logging level is set to info.
The profile setting needs to be in capital letters.
system:
  logging:
    profile: <DEVELOP or PRODUCTION>
Stdout¶
Logging to stdout can be enabled or disabled:
system:
  logging:
    stdout: true
File handler¶
Write all logs to files with a RotatingFileHandler. It is enabled when the setting system/logging/file_path exists, saving the logs to the specified location.
system:
  logging:
    file_path: <file_path>
Logstash¶
Logstash is the log collector used by Tamarco; it collects, processes, enriches, and unifies the logs sent by the different components of an infrastructure. Logstash supports multiple inputs for log ingestion; Tamarco supports three of them, each activated by its corresponding settings:
Logstash UDP handler¶
Send logs to Logstash using a raw UDP socket.
system:
  logging:
    logstash:
      enabled: true
      host: 127.0.0.1
      port: 5044
      fqdn: false
      version: 1
Logstash Redis handler¶
Send logs to Logstash using the Redis pubsub pattern.
system:
  logging:
    redis:
      enabled: true
      host: 127.0.0.1
      port: 6379
      password: my_password
      ssl: false
Logstash HTTP handler¶
Send logs to Logstash using HTTP requests.
system:
  logging:
    http:
      enabled: true
      url: http://127.0.0.1
      user:
      password:
      max_time_seconds: 15
      max_records: 100
The logs are sent in bulk. max_time_seconds is the maximum time without sending the buffered logs, and max_records is the maximum number of logs in a single HTTP request; whichever condition is met first triggers the request.
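This flush rule can be sketched in plain Python. The class below is an illustrative model only (the name BulkLogSender is hypothetical, and this is not Tamarco's actual HTTP handler):

```python
import time


class BulkLogSender:
    """Buffers log records and flushes them either when max_records is
    reached or when max_time_seconds has elapsed since the last flush,
    whichever happens first."""

    def __init__(self, send, max_time_seconds=15, max_records=100):
        self.send = send  # callable that would perform the HTTP request
        self.max_time_seconds = max_time_seconds
        self.max_records = max_records
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, record):
        self.buffer.append(record)
        too_many = len(self.buffer) >= self.max_records
        too_old = time.monotonic() - self.last_flush >= self.max_time_seconds
        if too_many or too_old:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(list(self.buffer))
            self.buffer.clear()
        self.last_flush = time.monotonic()
```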
How to setup a metric backend¶
The Microservice class comes with the metrics resource by default, which means the microservice reads the metrics configuration without any explicit code in your microservice.
Prometheus¶
Prometheus, unlike other metric backends, follows a pull-based (over HTTP) architecture for metric collection. This means the microservices are only responsible for exposing the metrics via an HTTP server, and Prometheus collects them by requesting them from the microservices.
It is currently the most actively developed of the supported metric backends.
The metrics resource uses another resource named tamarco_http_report_server, an HTTP server, to expose the application metrics. The metrics are always exposed at the /metrics endpoint. To expose Prometheus metrics, the microservice should be configured as follows:
system:
  resources:
    metrics:
      collect_frequency: 10
      handlers:
        prometheus:
          enabled: true
    tamarco_http_report_server:
      host: 127.0.0.1
      port: 5747
With this configuration, a microservice is going to expose the Prometheus metrics at http://127.0.0.1:5747/metrics.
The collect frequency defines the update period in seconds of the metrics in the HTTP server.
The microservice name is automatically prepended to the metric names. Example: a summary named http_response_time in a microservice named billing_api is exposed as billing_api_http_response_time.
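The naming rule can be illustrated with a tiny helper (hypothetical, not part of Tamarco's API):

```python
def exposed_metric_name(service_name: str, metric_name: str) -> str:
    # Mirrors the naming rule described above: the microservice name
    # is prepended to the metric name.
    return f"{service_name}_{metric_name}"


print(exposed_metric_name("billing_api", "http_response_time"))
# billing_api_http_response_time
```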
Carbon¶
Only the plaintext protocol, sent directly over a TCP socket, is supported.
To configure a Carbon handler:
system:
  resources:
    metrics:
      handlers:
        carbon:
          enabled: true
          host: 127.0.0.1
          port: 2003
      collect_frequency: 15
The collect frequency defines the period in seconds at which the metrics are collected and sent to Carbon.
File¶
It is an extension of the Carbon handler; instead of sending the metrics to Carbon, it appends them to a file. The format is the following: <metric path> <metric value> <metric timestamp>.
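The line format can be sketched with an illustrative helper (not Tamarco's actual implementation):

```python
def carbon_plaintext_line(path: str, value, timestamp: int) -> str:
    # Produces one line in the `<metric path> <metric value> <metric timestamp>`
    # layout described above.
    return f"{path} {value} {timestamp}"


print(carbon_plaintext_line("billing_api.http_requests", 42, 1528797322))
# billing_api.http_requests 42 1528797322
```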
To configure the file handler:
system:
  resources:
    metrics:
      handlers:
        file:
          enabled: true
          path: /tmp/tamarco_metrics
      collect_frequency: 15
The collect frequency defines the period in seconds at which the metrics are collected and written to the file.
Stdout¶
It is an extension of the Carbon handler; instead of sending the metrics to Carbon, it writes them to stdout. The format is the following: <metric path> <metric value> <metric timestamp>.
To configure the stdout handler:
system:
  resources:
    metrics:
      handlers:
        stdout:
          enabled: true
      collect_frequency: 15
The collect frequency defines the period in seconds at which the metrics are collected and written to stdout.
How to set up a settings backend¶
There are several ways to set up the settings. etcd is the recommended backend for a centralized configuration; the YAML file and dictionary backends are useful for development.
etcd¶
etcd is the recommended backend for a centralized configuration. All the configuration of the system can live in etcd, but before being able to read it, the microservices must be told how to access the etcd server.
The following environment variables need to be properly configured to use etcd:
TAMARCO_ETCD_HOST: needed to set etcd as the settings backend.
TAMARCO_ETCD_PORT: optional; defaults to 2379.
ETCD_CHECK_KEY: optional; if set, the microservice waits until the specified etcd key exists before initializing. This avoids race conditions between the initialization of etcd and the microservices, which is useful in orchestrators such as Docker Swarm, where dependencies between components cannot easily be specified.
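For instance, a deployment script might export these variables before starting the service (the values and the readiness key shown here are examples only):

```shell
# Example values; point the host at your own etcd instance.
export TAMARCO_ETCD_HOST=127.0.0.1
export TAMARCO_ETCD_PORT=2379              # optional, 2379 is the default
export ETCD_CHECK_KEY=system/deploy_name   # optional readiness key (example)
```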
YML file¶
To enable this feature, the following environment variable must be set:
TAMARCO_YML_FILE: the path of the YAML file, e.g. 'settings.yml'. Example of a YAML file with the system configuration:
system:
  deploy_name: test_tamarco
  logging:
    profile: DEVELOP
    file: false
    stdout: true
    redis:
      enabled: false
      host: "127.0.0.1"
      port: 7006
      password: ''
      ssl: false
  microservices:
    test:
      logging:
        profile: DEVELOP
        file: false
        stdout: true
  resources:
    metrics:
      collect_frequency: 15
    status:
      host: 127.0.0.1
      port: 5747
      debug: False
    amqp:
      host: 127.0.0.1
      port: 5672
      vhost: /
      user: microservice
      password: 1234
      connection_timeout: 10
      queues_prefix: "prefix"
Dictionary¶
It is possible to load the configuration from a dictionary:
from sanic.response import text

from tamarco.core.microservice import Microservice
from tamarco.resources.io.http.resource import HTTPServerResource


class HTTPMicroservice(Microservice):
    name = 'settings_from_dictionary'
    http_server = HTTPServerResource()

    def __init__(self):
        super().__init__()
        self.settings.update_internal({
            'system': {
                'deploy_name': 'settings_documentation',
                'logging': {
                    'profile': 'PRODUCTION',
                },
                'resources': {
                    'http_server': {
                        'host': '127.0.0.1',
                        'port': 8080,
                        'debug': True
                    }
                }
            }
        })


ms = HTTPMicroservice()


@ms.http_server.app.route('/')
async def index(request):
    print('Requested /')
    return text('Hello world!')


def main():
    ms.run()


if __name__ == '__main__':
    main()
How to setup settings for a specific microservice¶
The settings under system.microservices.<microservice_name>.<setting_path_to_override> override the general settings under system.<setting_path_to_override> for the microservice named <microservice_name>.
In the following example, the microservice dog reads the logging profile "DEVELOP" while the other microservices stay in the logging profile "PRODUCTION":
system:
  deploy_name: tamarco_doc
  logging:
    profile: PRODUCTION
    file: false
    stdout: true
  microservices:
    dog:
      logging:
        profile: DEVELOP
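The override rule can be modelled in plain Python. This is a simplified sketch, not Tamarco's actual settings resolver:

```python
def setting_for(settings, microservice_name, path):
    """Resolve a setting: a value under system.microservices.<name>.<path>
    takes precedence over system.<path>."""

    def dig(node, keys):
        for key in keys:
            if not isinstance(node, dict) or key not in node:
                return None
            node = node[key]
        return node

    keys = path.split(".")
    override = dig(settings, ["system", "microservices", microservice_name] + keys)
    return override if override is not None else dig(settings, ["system"] + keys)


settings = {
    "system": {
        "logging": {"profile": "PRODUCTION"},
        "microservices": {"dog": {"logging": {"profile": "DEVELOP"}}},
    }
}
print(setting_for(settings, "dog", "logging.profile"))  # DEVELOP
print(setting_for(settings, "cat", "logging.profile"))  # PRODUCTION
```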
The microservice name is declared when the microservice class is defined:
class MicroserviceExample(Microservice):
    name = 'my_microservice_name'
How to setup settings for a resource¶
The resources are designed to automatically load their configuration using the settings resource.
The resources should be defined as an attribute of the microservice class:
class MyMicroservice(Microservice):
    name = 'settings_from_dictionary'
    recommendation_http_api = HTTPServerResource()
    billing_http_api = HTTPServerResource()

    def __init__(self):
        super().__init__()
        self.settings.update_internal({
            'system': {
                'deploy_name': 'settings_documentation',
                'logging': {
                    'profile': 'PRODUCTION',
                },
                'resources': {
                    'recommendation_http_api': {
                        'host': '127.0.0.1',
                        'port': 8080,
                        'debug': True
                    },
                    'billing_http_api': {
                        'host': '127.0.0.1',
                        'port': 9090,
                        'debug': False
                    }
                }
            }
        })
The resources load their configuration based on the name of the attribute used to bind the resource to the microservice. In the example, there are two HTTPServerResource instances in the same microservice, and each one uses a different configuration.
The HTTPServerResource bound as recommendation_http_api finds its configuration under the path 'system.resources.recommendation_http_api'.
Be cautious when choosing these names. If several microservices use the same database, the resource instance must be bound under the same attribute name in all of them so that they load the same configuration.
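The lookup by attribute name can be sketched as follows (a simplified model, not Tamarco's code):

```python
def resource_settings(settings, attribute_name):
    # A resource bound to the microservice as `billing_http_api` reads its
    # configuration from system.resources.billing_http_api.
    return settings.get("system", {}).get("resources", {}).get(attribute_name, {})


settings = {"system": {"resources": {"billing_http_api": {"host": "127.0.0.1", "port": 9090}}}}
print(resource_settings(settings, "billing_http_api"))
# {'host': '127.0.0.1', 'port': 9090}
```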
How to use the logging resource¶
Tamarco uses the standard logging library; it only adds an automatic configuration based on the settings.
The microservice comes with a logger ready to use:
import asyncio
import logging

from tamarco.core.microservice import Microservice, task


class MyMicroservice(Microservice):
    name = 'my_microservice_name'
    extra_loggers_names = ["my_extra_logger"]

    @task
    async def periodic_log(self):
        logging.getLogger("my_extra_logger").info("Initializing periodic log")
        while True:
            await asyncio.sleep(1)
            self.logger.info("Sleeping 1 second")


if __name__ == "__main__":
    ms = MyMicroservice()
    ms.run()
More loggers can be configured by adding their names to the extra_loggers_names list of the Microservice class.
The logger bound to the microservice is the one named after the microservice, so you can get and use the logger wherever you want:
import logging


async def http_handler():
    logger = logging.getLogger('my_microservice_name')
    logger.info('Handling an HTTP request')
Logging exceptions¶
A very common pattern when programming microservices is logging exceptions. When the exc_info flag is active, Tamarco automatically sends the exception traceback to Logstash and prints it to stdout. This only works for logging calls inside an except block:
import asyncio

from tamarco.core.microservice import Microservice, task


class MyMicroservice(Microservice):
    name = 'my_microservice_name'

    @task
    async def periodic_exception_log(self):
        while True:
            await asyncio.sleep(1)
            try:
                raise KeyError
            except KeyError:
                self.logger.warning("Unexpected exception.", exc_info=True)


if __name__ == "__main__":
    ms = MyMicroservice()
    ms.run()
Adding extra fields and tags¶
The fields extend the logging with extra information, and the tags allow the logs to be filtered by key.
A common pattern is to enrich the logs with some information about the context. For example, with a request identifier the trace of a request can be followed across various microservices.
These fields and tags are automatically sent to Logstash when it is configured.
logger.info("logger line", extra={'tags': {'tag': 'tag_value'}, 'extra_field': 'extra_field_value'})
Default logger fields¶
Some extra fields are automatically added to the logging:
deploy_name: the deploy name configured in system/deploy_name; it allows distinguishing logs of different deploys, for example between staging, develop and production environments.
levelname: the log level currently configured in the Microservice.
logger: the logger name used when the logger is declared.
service_name: the service name declared in the Microservice.
How to use metrics resource¶
All Tamarco meters implement the Flyweight pattern: no matter where you instantiate a meter, if two or more meters have the same characteristics they are the same object. You don't need to worry about using the same object in multiple places.
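The behaviour can be sketched with a minimal flyweight class (illustrative only, not Tamarco's meter implementation):

```python
class Meter:
    """Meters built with the same characteristics are the same object."""

    _instances = {}

    def __new__(cls, name, units):
        key = (cls, name, units)
        if key not in cls._instances:
            instance = super().__new__(cls)
            instance.name = name
            instance.units = units
            instance.value = 0
            cls._instances[key] = instance
        return cls._instances[key]

    def inc(self):
        self.value += 1


a = Meter("cats", "animals")
b = Meter("cats", "animals")
print(a is b)  # True: both names refer to the same meter
```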
Counter¶
A counter is a cumulative metric that represents a single numerical value that only goes up. The counter is reset when the service restarts. A counter can be used to count requests served, events, tasks completed, errors, etc.
from tamarco.resources.basic.metrics.meters import Counter

cats_counter = Counter('cats', 'animals')
meows_counter = Counter('meows', 'sounds')
jumps_counter = Counter('jumps', 'actions')


class Cat:
    def __init__(self):
        cats_counter.inc()

    # It can work as a decorator: every time the function is called, the counter is incremented by one.
    @meows_counter
    def meow(self):
        print('meow')

    # Similarly, it can be used as a decorator of coroutines.
    @jumps_counter
    async def jump(self):
        print("jump")
Gauge¶
A gauge is a metric that represents a single numerical value. Unlike the counter, its value can also go down. Gauges are typically used for measured values like temperatures, current memory usage, number of coroutines, CPU usage, etc. Take into account that this kind of meter only reports its last value.
It is used similarly to the counter. A simple example:
from tamarco.resources.basic.metrics.meters import Gauge

ws_connections_metric = Gauge("websocket_connections", "connections")


class WebSocketServer:
    # As a decorator, the gauge is incremented every time the method is called.
    @ws_connections_metric
    def on_open(self):
        ...

    def on_close(self):
        ws_connections_metric.dec()
        ...
Summary¶
A summary samples observations over sliding windows of time and provides instantaneous insight into their distributions, frequencies, and sums. Summaries are typically used to get feedback about quantities where the distribution of the data is important, such as processing times.
The default quantiles are: [0.5, 0.75, 0.9, 0.95, 0.99].
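As an illustration of what those quantiles mean, they can be computed over a window of observations with the standard library (this is not how Tamarco's summary works internally):

```python
from statistics import quantiles

window = list(range(1, 101))     # e.g. 100 processing times in ms
cuts = quantiles(window, n=100)  # 99 percentile cut points
defaults = {q: cuts[round(q * 100) - 1] for q in (0.5, 0.75, 0.9, 0.95, 0.99)}
print(defaults[0.5])  # 50.5, the median of the window
```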
Timer¶
Gauge and Summary can be used as timers. The timer can be used as a context manager or as a decorator:
from tamarco.resources.basic.metrics.meters import Gauge, Summary

request_processing_time = Summary("http_requests_processing_time", "time")


@request_processing_time.timeit()
def http_handler(request):
    ...


my_task_processing_time_gauge = Gauge("my_task_processing_time", "time")

with my_task_processing_time_gauge.timeit():
    my_task()
Labels¶
The metrics admit labels to attach additional information to a meter. For example, the status code of an HTTP response can be used as a label to monitor the number of failed requests.
A meter with labels:
http_requests_ok = Counter('http_requests', 'requests', labels={'status_code': 200})


def http_request_ping(request):
    http_requests_ok.inc()
    ...
To add a label to an already existing meter: