Dependency Management for C++

Exciting times

After finishing my previous blog post I started working on a new article about performance measurement with micro-benchmarking and perf. Title to be determined ;-)

But then I took part in the Standard C++ Foundation developer survey. And what happened after the survey made me postpone that article and write the one you are reading now.

You can find a summary of the results on the ISOCPP website. When the summary was published I came across a tweet by Corentin about a post about dependency management, and a tweet by Bryce Lelbach. So I responded.

So that’s out. Wait, I just created a liability for myself on a difficult subject about which everyone has an opinion. The missing easy dependency manager has been bugging me for years already, and that is what made me respond. Years of frustration came out in that one tweet. But I’m serious: as a community we should solve this. Corentin’s article gives a lot of good starting points, and I agree with most of it.

This part of the standardization process should be studied by SG15 Tooling, chaired by Titus Winters.


Before continuing, it is advised to read the article by Corentin; I’m not going to repeat him here ;-). It is important that we all understand the difference between a package manager and dependency management. Corentin describes dependency management, and to me it is clear that what we need is dependency management and tooling around it. So let’s agree to talk only about dependency management from now on!

We should not try to build a ‘one and only’ dependency manager for C++. Let’s leave the building to tool builders, just as we do with compilers: the ISO C++ committee doesn’t build compilers, it writes specifications and leaves the implementation to compiler builders. The same can be done here, except for one part – I will get back to that later. We also shouldn’t try to build a ‘one and only’ build system. We should strive to have build-system builders implement the dependency specification in their tooling, and that is it. Existing build tools can add dependency management and use the standardized dependency specification. This specification can cover many aspects:

  • Libraries these sources depend upon
  • Compiler flags needed to successfully compile these sources
  • Linker flags needed to successfully link these sources
  • Architectures on which these sources can run without problems
  • Preferred compilers for these sources
  • Versioning and source information
  • Namespace information
  • Etc.

All of this is still specification (we as a community should align with tool builders to arrive at a specification that they can implement). The only part I think should be created and supported centrally is a repository in which sources are stored. Tools will use this central repository to download sources, and it goes without saying that these sources must comply with the specification. I agree with Corentin that the repository must be highly available and easily duplicated. I also agree that organizations can have their own in-house repository. But I disagree that this should be a commercial industry initiative. Security and reliability are super important for the success of the initiative, and therefore it must be under control of the central standardization body. This repository must be the single source of truth.

Let’s do it

I really think that dependency management is important for the future of C++ and I’m willing to contribute to it. So Titus, when do we start?

C++ Application on CloudFoundry


My employer decided to go with Pivotal CloudFoundry as one of the cloud providers for the company. I attended a launch event for all developers, where Pivotal delivered a talk on using the platform with a bit of Go and a lot of Java in the examples. Interesting, but real performance junkies want an application that is compiled to a binary and runs natively on the hardware (which is always virtualized in a cloud environment). At least I want to be able to do that. So I decided to use C++ to create a TimeZoneService that translates a date from a certain time zone into a date in a different time zone. I will show how to deploy this service to CloudFoundry (not only Pivotal, but also IBM uses CloudFoundry as the platform for their cloud offerings).


The service is rather simple. You send it a JSON document with a conversion request and you get back a JSON document with a converted date or an error. An example request looks like this:

{ "TimeZoneFrom":"Europe/London", "TimeZoneTo":"Europe/Amsterdam", "Year":2017, "Month":3, "Day":20, "HourIn24H":15, "Minute":25 }

You will get a result that looks like the following:

{ "ConvertedDateTime": "2017-03-20 16:25:00.000000000 CET" }


Now the interesting stuff. I use CLion as my IDE, and CLion uses CMake for the build. I used the Conan package manager for a while, but I’m still not convinced that Conan suits my needs. One of the libraries I use doesn’t have (all) the debug symbols in the library. While I can choose to build the library with debug symbols myself with the help of Conan, I’m not able to build it with my own compiler and linker flags, so some details are still missing from the library. In the end I didn’t use Conan for this project. The libraries used are Poco, Howard Hinnant’s date, Catch2 and Trompeloeil (the latter two for unit testing and mocking).

As I have no say in which libraries are installed on the CloudFoundry platform, I have to deliver all my dependencies to the platform myself. The easiest way is to use statically linked libraries, so you get an executable that has all dependencies baked in. One of the things that complicated this is that the Poco libraries have some circular dependencies, so I changed the installation for my Vagrant box (see “Try it yourself”) to have a libPocoAlld.a library to link against.

Some research showed that the CloudFoundry platform is Ubuntu based, so I knew I had to compile the TimeZoneService for Linux/Ubuntu to get an executable that is ABI compatible. After this research things started getting easier (as always once you understand a subject). To deploy an app to CloudFoundry, you use the CF command line tool, and for a natively compiled binary you have to use the binary buildpack. It is very important that your app listens on the right port: the CloudFoundry platform uses an environment variable to communicate the port to the application. When the app starts, it reads the PORT environment variable and listens on that port on localhost. It is important to use localhost instead of a DNS name, because CloudFoundry decides on which host the app will be deployed; you can never know the hostname in advance, but fortunately localhost is always available. To deploy the app on CloudFoundry, you execute the following command:

cf push TimeZoneService -c './TimeZoneService' -b binary_buildpack
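As an aside, the PORT handling described above can be sketched with only the standard library. This is a minimal stand-in; the function name and the 8080 fallback for local runs are my own choices, not taken from the actual TimeZoneService:

```cpp
#include <cstdlib>  // std::getenv
#include <string>   // std::stoi

// CloudFoundry communicates the port to listen on through the PORT
// environment variable; fall back to a default when running locally.
unsigned short listeningPort() {
    const char *port = std::getenv("PORT");
    return port ? static_cast<unsigned short>(std::stoi(port)) : 8080;
}
```

The returned port is then used when binding the HTTP server to localhost.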

Bummer, the app is not deployed. See the screenshot.

Failed CloudFoundry deployment

So I need to recompile with static linking against libgcc and libstdc++ and push the app again to CloudFoundry.
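The recompilation amounts to adding static-link flags for the GNU runtime libraries to the build. A minimal sketch as a CMake fragment (where exactly it goes in the CMakeLists.txt is up to you):

```cmake
# Link libgcc and libstdc++ statically so the binary does not depend on the
# runtime library versions installed on the CloudFoundry host.
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++")
```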

Now the app is deployed. How to access the newly deployed app depends on your CloudFoundry account and settings. You can use curl to request a time zone conversion. The example here is for my trial account.

curl -H "Content-Type: application/json" -X POST -d '{ "TimeZoneFrom":"Europe/London", "TimeZoneTo":"Europe/Amsterdam", "Year":2017, "Month":3, "Day":20, "HourIn24H":15, "Minute":25 }'
curl -H "Content-Type: application/json" -X GET

Try it yourself

If you want to try it yourself, go to my GitHub and clone the repository. You will get a Vagrantfile that installs an Ubuntu version on which you can compile the app. You will need an account with a CloudFoundry cloud provider. I cannot provide you with one, but you can get a trial from Pivotal or one of the other providers.

  • Clone the repository:
$ git clone
  • Go into the vagrant folder of the cloned repository and start the virtual machine:
$ vagrant up
  • Login to your vagrant box by issuing the following command:
$ ssh -p 2222 vagrant@localhost
  • Go into the project folder with the following command:
$ cd /workspace/TimeZoneService/cmake-build-debug
  • Execute CMake and make:
$ cmake ../
$ make
  • The executables will be in the bin directory. First run the tests by executing the TimeZoneServiceTest binary.
  • Now you can try to run it on your CloudFoundry.
  • After logging in with cf login, set a target org and space with cf target -o ORG -s SPACE.
  • Go into the bin directory of your project and remove the TimeZoneServiceTest executable (unless you want to deploy it too, which is useless). Now you can deploy with:
cf push TimeZoneService -c './TimeZoneService' -b binary_buildpack
  • The output of the push command tells you where you can find your deployed app; TimeZoneService was deployed at the route shown in that output.

A Microserver built with C++


Microservice architectures are becoming more common as a deployment model for componentized systems. I will not go in depth on microservice architectures in this article; Martin Fowler describes the architecture on his website, see the article about microservices by Martin Fowler and James Lewis.

I will show an implementation of a microservices server that uses minimal resources. The server is deployed separately from the services that use it to provide their services to the world.


This server is based on the Poco Project libraries (version 1.6.1); see the Poco Project website. It must be compiled with a C++11-compatible compiler. CMake (3.3.2) is used to prepare the makefiles (GNU make 3.81) to compile and link the microserver (and the services that use it). The microserver software is developed on OS X Yosemite (10.10.5) with clang (700.1.81) and has been tested on OS X Yosemite (10.10.5) and Ubuntu 14.04. My continuous integration system runs Jenkins on Ubuntu 14.04.

Running the microserver

After compiling you can start the microserver application. The application’s configuration must be stored in a property file (the file extension must be .properties). By default the search path for the property file is the directory in which the application executable is stored; if it is not found there, /etc/<executable name> is searched, and after that the /etc directory. A location can also be given with the command line argument -c or --config, with a full path to the property file. When this argument is given it takes precedence over the default search paths, so the microserver application will not look in the default paths for a configuration file. When the microserver application is started as a daemon, it changes the current working directory to “/” (the root of the filesystem).
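The search order described above can be sketched as follows. This is an illustration, not the microserver’s actual code; the function name and exact candidate paths are my own rendering of the rules above:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Search order: next to the executable, then /etc/<executable name>,
// then /etc. A path given via -c/--config overrides the whole search.
std::string findConfig(const std::string &exeDir, const std::string &exeName,
                       const std::string &overridePath) {
    if (!overridePath.empty()) return overridePath;  // -c/--config wins
    std::vector<std::string> candidates = {
        exeDir + "/" + exeName + ".properties",
        "/etc/" + exeName + "/" + exeName + ".properties",
        "/etc/" + exeName + ".properties"};
    for (const auto &path : candidates) {
        if (std::ifstream(path)) return path;        // first existing file
    }
    return "";                                       // not found
}
```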

Note: because I can only develop and test under OS X and Linux, I didn’t make the microserver search for the configuration file in the standard MS Windows locations. Perhaps someone can test and deliver a patch (when needed) to add this functionality on Windows.

The services are defined in the configuration file. A service is defined by a URI, a library that implements the service, and the fully qualified name of the class that acts as the provider. A full path to the implementing library must also be given in the property file. When a URI is not defined in the configuration file, the microserver responds with a default 404 Not Found HTTP error.

The microserver can use lazy loading for the service libraries. The lazy-loading property can be defined in the configuration (properties) file; when not defined, it defaults to false, meaning no lazy loading. This (first) version of the microserver cannot unload libraries: to unload a library you need to stop and start the microserver.
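Lazy loading here means deferring the library load until the first request for its URI, then caching the result. That idea can be sketched with plain standard-library stand-ins (the class and member names are illustrative, not the microserver’s):

```cpp
#include <functional>
#include <map>
#include <string>

// Maps a URI to a loader function; the library is "loaded" only when the
// URI is first requested, and the result is cached for later requests.
class LazyRegistry {
public:
    void add(const std::string &uri, std::function<std::string()> loader) {
        loaders_[uri] = std::move(loader);
    }
    std::string get(const std::string &uri) {
        auto cached = cache_.find(uri);
        if (cached != cache_.end()) return cached->second;  // already loaded
        std::string lib = loaders_.at(uri)();               // load on demand
        cache_[uri] = lib;
        return lib;
    }
private:
    std::map<std::string, std::function<std::string()>> loaders_;
    std::map<std::string, std::string> cache_;
};
```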

See below for an example configuration file.

A status message can be requested (by a load balancer, for instance) to get a 200 OK HTTP message. The response to a status request contains a JSON document with a status 200 (OK) message, indicating that this server is servicing requests. A status request is made by requesting the URI http://host:port/status, where host is the hostname of the server on which the microserver is deployed and port is defined in the configuration file. When the port is not defined in the configuration file, it falls back to the default port, 9980.

Creating a library to handle requests

Class diagram

The microserver uses dynamic class loading from the Poco libraries. The provider class must inherit from AbstractRequestHandler; as described in the Poco documentation, all classes loaded by a class loader must have a common base class. The base class is necessary because the microserver needs an interface to access the provider class, so provider classes must inherit from AbstractRequestHandler. The provider class acts as a factory for the classes that handle the actual requests, which must inherit from HTTPRequestHandler.
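The provider/handler relationship is essentially a factory: given a URI, the provider returns a handler object, and the microserver only knows the common base class. A self-contained sketch of that pattern, with plain stand-ins for the Poco types (the stand-in names and the handle() method are assumptions for illustration):

```cpp
#include <string>

// Stand-in for Poco::Net::HTTPRequestHandler.
struct RequestHandler {
    virtual ~RequestHandler() = default;
    virtual std::string handle(const std::string &body) = 0;
};

// Common base class required by the class loader: the microserver only
// knows this interface, not the concrete provider in the plug-in library.
struct AbstractRequestHandler {
    virtual ~AbstractRequestHandler() = default;
    virtual RequestHandler *getRequestHandler(std::string uri) = 0;
};

struct HelloHandler : RequestHandler {
    std::string handle(const std::string &) override { return "Hello World!"; }
};

// The provider acts as a factory for the actual request handlers.
struct HelloProvider : AbstractRequestHandler {
    RequestHandler *getRequestHandler(std::string uri) override {
        return uri == "/helloworld" ? new HelloHandler() : nullptr;
    }
};
```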


Configuration items that (together) define a library and a provider class start with ‘microserver.library’. The following example defines a library with the name ms-hello-world-lib (the property-name suffixes uri, path and class are shown here for illustration):

microserver.library.ms-hello-world-lib.uri = /helloWorld
microserver.library.ms-hello-world-lib.path = /var/microservices/
microserver.library.ms-hello-world-lib.class = hello_world::MicroServerRequestHandlerProvider

It is advised to use namespaces in the library that implements the provider class (isn’t it always). I prefer to reuse the class name MicroServerRequestHandlerProvider for consistency; by using a namespace, the actual provider classes can all use the same descriptive name and still be used next to each other.

The full path must be given in the path configuration item, because the dynamic class loader otherwise uses system-wide predefined search paths. By using the full path you can be sure that the dynamic class loader will find your library.

The configuration item that defines the library name (ms-hello-world-lib in the above example) also defines the URI that points to the service(s) provided. All URIs that start with /helloWorld in the above example will be handled by hello_world::MicroServerRequestHandlerProvider. This means, for instance, that host:port/helloWorld but also host:port/helloWorld/SaySomething will be handled by the same provider class.
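The prefix routing described above can be sketched in a few lines (a stand-alone illustration, not the microserver’s actual routing code):

```cpp
#include <string>

// Returns true when the request URI falls under the configured service URI,
// e.g. "/helloWorld" matches both "/helloWorld" and "/helloWorld/SaySomething".
bool matchesService(const std::string &uri, const std::string &serviceUri) {
    return uri.compare(0, serviceUri.size(), serviceUri) == 0;
}
```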


Logging is defined in the configuration file. The default logger is the root logger, from which all loggers inherit; the Poco logging framework is structured as a logging hierarchy (see the Poco project documentation for more information). All microserver classes define their own logger, which inherits from the root logger. The logger name is the name of the class in which the logger is defined; for example, the logger defined in the class MicroServer goes by the name MicroServer. When needed you can define different log levels for different classes; see the HelloWorld example for such a setup.

The unavoidable HelloWorld example

To show the setup and working of the microserver, a HelloWorld example is provided. To play with it, you can download the microserver and the HelloWorld sources from GitHub and build both with CMake. If you want to see the microserver in action without building it yourself, go to HelloWorld.



#include "Poco/Net/HTTPRequestHandler.h"
#include "Poco/Logger.h"

using Poco::Net::HTTPRequestHandler;
using Poco::Net::HTTPServerRequest;
using Poco::Net::HTTPServerResponse;
using Poco::Logger;

namespace ms_helloworld {

class MicroServerHelloWorldRequestHandler : public HTTPRequestHandler {
public:
  MicroServerHelloWorldRequestHandler();

  void handleRequest(HTTPServerRequest &request, HTTPServerResponse &response) override;

private:
  Logger &l = Logger::get("MicroServerHelloWorldRequestHandler");
};

} // namespace ms_helloworld


#include "MicroServerHelloWorldRequestHandler.h"
#include "Poco/Net/HTTPServerRequest.h"
#include "Poco/Net/HTTPServerResponse.h"

namespace ms_helloworld {

    MicroServerHelloWorldRequestHandler::MicroServerHelloWorldRequestHandler() { }

    void MicroServerHelloWorldRequestHandler::handleRequest(HTTPServerRequest &request, HTTPServerResponse &response) {
        l.information("Request from " + request.clientAddress().toString());

        std::ostream &ostr = response.send();
        ostr << "<html><head><title>MicroServerRequestHandler powered by POCO C++ Libraries</title>";
        ostr << "<meta http-equiv=\"refresh\" content=\"30\"></head>";
        ostr << "<body>";
        ostr << "Hello World! MicroServerHelloWorldRequestHandler.";
        ostr << "</body></html>";
    }

} // namespace ms_helloworld



#include "AbstractRequestHandler.h"
#include "Poco/Net/HTTPRequestHandler.h"
#include "Poco/ClassLibrary.h"

using Poco::Net::HTTPRequestHandler;

namespace ms_helloworld {

class MicroServerRequestHandlerProvider : public AbstractRequestHandler {
public:
  HTTPRequestHandler* getRequestHandler(std::string uri) override;
};

} // namespace ms_helloworld


#include "MicroServerRequestHandlerProvider.h"
#include "MicroServerHelloWorldRequestHandler.h"

namespace ms_helloworld {

HTTPRequestHandler *
MicroServerRequestHandlerProvider::getRequestHandler(std::string uri) {
  if (uri == "/helloworld") {
    return new MicroServerHelloWorldRequestHandler();
  } else {
    return nullptr;
  }
}

} // namespace ms_helloworld

This is my microserver example. When analyzing the microserver in production, you will find that it is very resource efficient. I used Docker to deploy the example; it was very easy, and I will explain what I did to deploy this microserver with Docker in a future post.
I’m happy to receive your opinions and reviews on the microserver and the C++ source code. Anything is very welcome.

Happy Coding Everyone!