Sample Hyperledger basic network

Clone the project from here: https://github.com/ExcentioSRL/excentio_network

Since I started studying Hyperledger I struggled to find a written tutorial that explained how to create my own network from scratch. All I found was a ton of documentation and the official test network on the Hyperledger Foundation GitHub. Here are the links: https://github.com/hyperledger/fabric https://hyperledger-fabric.readthedocs.io/en/release-2.2/

The problem with these is that it’s really easy to lose your bearings, so after playing a bit with the test network I decided to build my own scripts from scratch. I started from this guide in the docs: https://hyperledger-fabric-ca.readthedocs.io/en/latest/operations_guide.html#setup-tls-ca

Then I built what you can find at the link related to this article: the excentio network.

Let’s start explaining what these scripts do:

Tools needed:

  • configtxgen binary
  • fabric-ca-client binary
  • Docker
  • Docker Compose

For these tools, if you are using macOS with an M1 chip you can use the binaries that are in the project; otherwise you can use the install-fabric.sh script to download the binaries for your OS/arch. The same goes for the docker compose files: you can remove the platform directive if you are not on an M1 Mac.
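For reference, here is a minimal sketch of pulling the binaries with the script from the hyperledger/fabric repository (the component names below are the ones the script accepts; check its --help if in doubt):

# download the helper script and fetch the binaries for your platform
curl -sSLO https://raw.githubusercontent.com/hyperledger/fabric/main/scripts/install-fabric.sh
chmod +x install-fabric.sh
./install-fabric.sh binary   # add "docker" to also pull the Fabric images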

CERTIFICATE AUTHORITIES

As you may already know, Hyperledger relies on certificates for authorization and authentication. In particular, every identity is represented by a membership service provider (MSP), which is simply a folder containing all the data (keys and certificates used to sign transactions). On top of that, other certificates are needed: the TLS certificates, used to set up a secure communication channel among network actors and also for client communication with the network.

The architecture we chose to follow uses a single TLS CA that issues TLS certificates for every organization, while every org has its own ecert CA, the one responsible for managing MSPs.

If you are asking yourself why we split the TLS CA and the ecert CA: it follows the “separation of concerns” principle, so that if one is compromised the other is not, and vice versa (it is also described as a best practice in the Hyperledger docs).

TLS CA scripts
staging-tls-ca.yml [DOCKER COMPOSE]
init-tls-ca.sh [SHELL]

The first file runs a new container with a fabric-ca image inside; it maps all the internal paths used by Fabric into bind volumes mounted on specific folders of the host machine, so that everything we do in those folders is accessible from inside the docker container.

The second file first enrolls the CA admin, which has already been registered through the command directive during the compose phase, then it performs some register commands for identities that other network components will later use to get their TLS certificates.
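As a rough sketch, the compose file boils down to something like the plain docker command below; the image tag, port, paths and credentials are assumptions for illustration, not the exact values in the repo. The -b flag is what registers the CA admin at boot:

# start a TLS CA whose state lives in a bind-mounted host folder
docker run -d --name tls-ca \
  -v $(pwd)/tls-ca-data:/tmp/hyperledger/tls-ca \
  hyperledger/fabric-ca:1.5 \
  fabric-ca-server start -b tls-ca-admin:tls-ca-adminpw --port 7052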

Register and enroll are the two phases of MSP creation: the first one creates the identity on the CA, which is stored in the CA db with its user and password; this is performed by the CA administrator, who hands the created credentials to the actor that will use the MSP. Later, the MSP owner performs an enroll command to get its own certificates (if it enrolls on an ecert CA these will be the signing certs, otherwise they will be the TLS certs).
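A minimal sketch of the two phases with fabric-ca-client; the identity name, secret and port are hypothetical, and I am assuming FABRIC_CA_CLIENT_HOME and FABRIC_CA_CLIENT_TLS_CERTFILES are already exported, as the scripts do:

# phase 1, run by the CA admin: creates the record in the CA db
fabric-ca-client register -d --id.name peer1-datainform --id.secret peer1PW --id.type peer -u https://localhost:7052
# phase 2, run later by the identity owner: produces the actual certificates
fabric-ca-client enroll -d -u https://peer1-datainform:peer1PW@localhost:7052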

So here is what happens at this stage: the TLS CA is bootstrapped, the script enrolls the TLS CA admin and registers identities for every other actor in this network; later, each of them will run the enroll command.

ecert CA scripts
docker-compose-…orgname…-ecert-ca.yml [DOCKER COMPOSE]
init-…orgname…-ca.sh [SHELL]

Every org has these two files: one brings up the CA, the other enrolls the CA admin and registers known future users, such as peers and peer admins.

It is important to notice that every script sets these environment variables on the host machine; they tell the client binary where to put the MSP data when it runs an enroll command (only enroll produces MSP files, register only creates a record in the internal db):

  • FABRIC_CA_CLIENT_HOME –> root for every client operation
  • FABRIC_CA_CLIENT_TLS_CERTFILES –> where to find the TLS files used to call the CA
  • FABRIC_CA_CLIENT_MSPDIR –> where to save the MSP when performing an enroll operation

Notice also that when an identity enrolls on the TLS CA the client saves the TLS data into a tls-msp folder, while when it enrolls on the ecert CA it saves under the msp folder.
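A sketch of how the scripts steer those folders; all paths and ports are hypothetical, loosely following the layout of the operations guide:

export FABRIC_CA_CLIENT_HOME=/tmp/hyperledger/datainform/peer1
export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/hyperledger/tls-ca/crypto/ca-cert.pem

export FABRIC_CA_CLIENT_MSPDIR=tls-msp   # the TLS enroll lands here
fabric-ca-client enroll -d -u https://peer1-datainform:peer1PW@localhost:7052 --enrollment.profile tls --csr.hosts peer1-datainform

export FABRIC_CA_CLIENT_MSPDIR=msp       # the ecert enroll lands here
fabric-ca-client enroll -d -u https://peer1-datainform:peer1PW@localhost:7054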

peer scripts

When every CA has been set up, our orgs are ready to enroll the peer MSPs, and then it is possible to bootstrap the peers. In the same way we did for the org CAs, we will have a docker compose file for every peer of every org, along with an init script. Before running a peer, we have to make sure that its crypto material has been produced under the correct folder, which is specified by the content of {FABRIC_CFG_PATH}/{msppath}.

e.g. /tmp/hyperledger/datainform/peer1/peer/msp, which is mapped to ./datainform-peer1-data/peer/msp

This crypto material is built by the enroll commands of the init script.

Another important thing is to put the core.yml and config.yml files in the right place: these represent the peer’s internal configuration, which is then overridden by docker environment variables. It is important to have these files, otherwise the peer can’t boot: since we have changed the cfg path to a custom one, that is where it expects to find all configuration files and MSPs. I’m not going to explain the properties in detail, you can find everything in the Hyperledger docs; just be sure they are in the right place and that config.yml contains a structure like this one:

NodeOUs:
  Enable: true
  ClientOUIdentifier:
    OrganizationalUnitIdentifier: client
  PeerOUIdentifier:
    OrganizationalUnitIdentifier: peer
  AdminOUIdentifier:
    OrganizationalUnitIdentifier: admin
  OrdererOUIdentifier:
    OrganizationalUnitIdentifier: orderer

This defines four organizational units, which are also embedded in the MSP certificates and let the network know which role the MSP user has: admin, client, peer or orderer. OUs will be really important later, when we try to create a new channel: by design, Fabric has a system policy that lets only org admins edit channel configurations, and it can’t be modified at network boot but only with explicit commands launched from peers, so it is a lot easier to just set up OUs and use them.

where to put config.yml and core.yml

Now you are asking yourself: yes, very nice to know, but where do I have to put these files?! The answer is here: core.yml goes next to the msp folder, while config.yml goes inside the msp folder.

Continuing the example, the paths will be these two:

./datainform-peer1-data/peer/msp/config.yml
./datainform-peer1-data/peer/core.yml
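As plain commands, assuming core.yml and config.yml sit in your current directory, the placement is simply:

# put the peer configuration next to the msp folder, and the NodeOUs file inside it
cp core.yml datainform-peer1-data/peer/
cp config.yml datainform-peer1-data/peer/msp/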

In the next step we are going to start the peer.

STARTING A PEER

Once everything we explained a moment ago is done, it is possible to run the docker compose for our org’s peer, which will try to boot and… fail! This is because I still need to improve the script, which should rename the TLS private key to key.pem. So you just need to go to datainform-peer1-data/tls-msp/keystore, rename the content to key.pem, and then re-run the peer with this command:

docker start <<container_name>>

Now it should boot correctly.
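The rename itself can be scripted; here is a one-line sketch, assuming the keystore contains a single key file whose name ends in _sk (the name fabric-ca-client generates):

# give the TLS private key the name the peer configuration expects
mv datainform-peer1-data/tls-msp/keystore/*_sk datainform-peer1-data/tls-msp/keystore/key.pem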

SETTING UP THE ORDERING SERVICE

We talked about CA and peer nodes; now it’s time to set up our ordering service. Again, if you want a complete explanation of what an orderer is and what the best practices are, I’m redirecting you to the official docs: https://hyperledger-fabric.readthedocs.io/en/latest/orderer/ordering_service.html

Anyway, you can think of the ordering service as the “miner” of a permissioned Fabric network. The difference is that there is no Proof of Work or Proof of Stake, only consensus algorithms such as Raft or other fault-tolerant protocols. Basically, the orderers are the nodes responsible for checking approval by peers and ordering transactions, then producing a new block which is copied into every peer’s ledger (you may already know that every peer has its own copy of the ledger).

I’m also assuming you have already set up an ecert CA for the ordering organization; in our architecture it is called Present, the company responsible for managing the ordering service (notice that it is possible to have more than one org running ordering nodes and maintaining the ordering service).

So first we need to register and enroll the orderer admin and the orderer-peer on the TLS CA and the ecert CA; this is exactly what the first part of the script does.

The name of this init script is init-fabric-orderer-present.sh, but before running it we need to be sure of three things:

  1. all peer MSPs are enrolled and every peer is able to boot correctly
  2. the TLS certs are correctly copied under the tls-msp folder of each peer
  3. we have created the genesis block under the ordering volume folder

Assuming the first two points are clear (the previous scripts should have done that), we will focus on the third point. As I said, we need a genesis block that will define which orgs are in the network as well as the network policies. To create this genesis block we will use configtxgen (the last part of the present script); it needs a configtx.yml file and will produce a genesis.block file under the present-orderer1-data folder.
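Roughly, that last part of the script is a call like this; the channel ID and output path are assumptions for illustration:

export FABRIC_CFG_PATH=.   # the folder containing configtx.yml
configtxgen -profile TwoOrgsApplicationGenesis -channelID presentchannel \
  -outputBlock ./present-orderer1-data/genesis.block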

Let’s dig a bit into the configuration:

You’ll see three orgs: Present, Datainform and Excentio.

  • For every org there is an mspID that must match the one we defined in the peer or orderer docker configuration with the CORE_PEER_LOCALMSPID or ORDERER_GENERAL_LOCALMSPID variable
  • MSPDir: where configtxgen will look for MSP data when creating the block
  • Name: the org name
  • Policies: who can do what in this org; for example, you can see that being an org member is enough to read and write, while to run an administration command you must be recognized as an admin

Capabilities: this section is useful for managing compatibility among actors running different Fabric versions

Orderer: here the orderer type and a lot more is defined; in a production-like environment this is where we define how fast our orderer will work, how many messages at a time it will process and so on, which will obviously have an impact on resource usage. Furthermore, be aware that the solo type can be used only for development purposes.

Profiles: this is also an important section; when we run configtxgen the command looks for a profile that picks parts of the configuration listed in the previous sections and uses them to create a block. As you can see, there are two profiles defined: one that produces a channel configuration transaction and one for the genesis block.
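The channel configuration side would look something like this; the profile name here is a placeholder, use the one actually defined in the repo’s configtx.yml:

configtxgen -profile TwoOrgsChannel -channelID presentchannel -outputCreateChannelTx ./channel.tx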

Now you can understand what the second part of the script does: it creates a genesis block using the TwoOrgsApplicationGenesis profile.

Let’s not waste any more time and just run it: you should find the MSP data, the tls-msp data and genesis.block under the orderer volume folder.

We are almost done, the last two things are:

  • copy orderer.yml into present-orderer1-data and copy config.yml into the present-orderer1-data/msp folder
  • rename the tls-msp/keystore file to key.pem
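As commands, following the layout above (and the same single-key assumption as for the peers):

# place the orderer configuration files, then rename the TLS private key
cp orderer.yml present-orderer1-data/
cp config.yml present-orderer1-data/msp/
mv present-orderer1-data/tls-msp/keystore/*_sk present-orderer1-data/tls-msp/keystore/key.pem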

Now it is possible to run:

docker compose -f docker-compose-orderer1-....yourorg.... up -d

You should see the orderer running, using the MSPs and the genesis block produced in the previous steps.

CLI TOOLS

Last but not least, you will need to create docker containers with CLI tools to interact with your peers and orderers; for example, I created the datainform and excentio CLI docker composes. They need the MSP folders under their volumes; I decided to use the admin MSPs that were enrolled in the previous steps, so that I’m able to interact with my peers as an admin. Be sure to also copy core.yml and config.yml under their volumes, as they need these files to boot correctly.

Another file I never mentioned until now is the channel.tx produced by configtxgen during the ordering service setup: if you copy this file under the CLI volume you’ll be able to run a peer channel creation, which will create the presentchannel.block that lets you join the new channel. All the CLI commands used to create the channel and join it with both peers are inside the cli-peer-commands.sh file; it is not meant to be launched as a whole, you’ll have to attach to the CLI container of the peer you want to use (with docker exec) and run the commands in that terminal.
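A sketch of what those commands boil down to; the orderer hostname, port and certificate path are assumptions, check cli-peer-commands.sh for the real values:

# run inside the CLI container, e.g.: docker exec -it <cli_container> bash
peer channel create -c presentchannel -f ./channel.tx \
  -o orderer1-present:7050 --outputBlock ./presentchannel.block \
  --tls --cafile /path/to/tls-ca-cert.pem
peer channel join -b ./presentchannel.block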

CONCLUSIONS

This was an overview of the excentio network scripts; I did not explain every part in detail because I don’t want to be too specific and boring. My advice is to take these sources, try to run them and customize everything into a version based on your needs; I hope this will be a good starting point for people who want to build their own development network. In the next articles, I’ll talk about deploying this network into LXC containers and making it accessible from other machines, so that it can serve as a common development environment for developers working on the same project. Feel free to contribute by opening issues and/or forking the project, as well as telling me what in your opinion should be improved or changed.

Oops, I almost forgot: there is also a script called build_network.sh that builds the whole network by running all the steps by itself, even if it still has some issues.

Author: Emanuele Giuliano