Welcome to OriginTrail documentation!

Company Website: origintrail.io

Introduction

OriginTrail is the first purpose-built protocol for supply chains based on blockchain.

Please keep in mind that the protocol is currently in the Testnet (Beta) stage; some features are still limited or a work in progress. The Main Network launch is scheduled for Q1 2019. For more detailed development plans, please see the roadmap on our website.

The documentation is aimed towards three groups of readers:

  • Node runners
  • Supply chain service providers
  • Software Developers

Setting up the Node

In order to be part of the OriginTrail Decentralized Network (ODN), you need to install the node. For detailed instructions on the installation and configuration of an OriginTrail node, visit the following sections:

Service provider guidelines

Service providers are companies that offer supply chain management solutions, consulting or advisory services to help their clients improve efficiency, enhance product and consumer safety, and protect their brands, to name a few. Please visit the Implementation guidelines for more resources.

Development contribution

OriginTrail is an open source project. We happily invite you to join us in our mission of building the protocol to establish more transparency within global supply chains.

For more information, please take a look at the contribution section.

Data Layer

The purpose of this document is to explain the data structures of the OriginTrail protocol data layer. OriginTrail is a purpose-built protocol for trusted supply chain data sharing, utilizing GS1 standards and based on blockchain.

The following abstractions are based on the observations from several years of working with supply chain transparency solutions. The aim of the abstraction is to be generic enough to support any present and future use cases involving supply chain event visibility and data exchange, while on the other hand being as tailored as possible to provide optimal technical performance to support such cases.

Underlying technology and technical rationale

The OriginTrail Decentralized Network (ODN) data layer is intended to provide data interoperability (between different data providers, i.e. supply chain stakeholders), as well as easy interconnectivity between different data sets regardless of their source. This suggests that the very nature of the data structures in play is as much about the information itself as it is about the connections between the data provided from different sources. The network consists of incentivised nodes (see Network structure).

During the years of building supply chain transparency solutions we have come to the conclusion that the most adequate data structure is a graph, where the connections between data points are “first-class citizens” of the structure. Our previous implementations involved relational databases, which proved suboptimal and ultimately converged towards emulating graph logic within the table structures. A native graph database has proven to be superior for the problems OriginTrail is tackling.

A graph is ideal to represent the chain of custody of every trackable element in the supply chain, its properties over time and its interactions with facilities, transportation devices, companies and people. The structure of supply chain data in the graphs of the data layer is the subject of this document; it is based on GS1 standards, but is not exclusive of other standardization schemes and aims to be standard-inclusive.

It is important to note that these graph abstractions are not dependent on the specific implementation of the underlying graph database. The data layer is intended to be “plug-and-play” in this regard, allowing the choice of the underlying database as long as it can support the structures and features needed by the data layer (see Introducing data to the Data layer). In this way the system can be extended to support future data formats and providers.

Once the data gets converted into graph form and stored in the database, its fingerprint is stored on the blockchain. This process, as well as the specific details on importer implementations, are out of the scope of this document, but will be subject of further documentation.

Entities in graph structure (ontology)

There are five generic entity groups in the OriginTrail graph structure:

  • Objects and ObjectClasses, represented as nodes
  • Events and EventClasses, represented as nodes
  • Connections, represented as edges

Each entity can contain

  1. An identifier block: contains a set of IDs describing the particular entity. Examples include identifiers such as GS1 codes (e.g. EAN8, EAN13), QR codes, batch identifiers (LOT or serial numbers) or any type of internal identifier used by the parties in the supply chain. These allow for easy lookups and traversals. Additionally, each graph element has a unique ID generated deterministically by the system.
  2. A data block: contains all non-ID related information about the entity and is extensible.
  3. A metadata block: contains information about the import process, signing keys and other technical details, unrelated to the supply chain data itself.
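
To make the three blocks concrete, below is a purely illustrative sketch of how a single entity could carry them; the field names are hypothetical and do not represent the exact importer schema:

{
    "identifiers": {
        "id": "urn:epc:id:sgln:Building_1",
        "internal_id": "<internal ERP identifier>",
        "uid": "<deterministically generated graph element ID>"
    },
    "data": {
        "description": "<non-ID information about the entity>"
    },
    "meta": {
        "importer_wallet": "<signing key of the importing node>",
        "imported_at": "<import process details>"
    }
}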

Objects and ObjectClasses

Objects represent all physical or digital entities involved in events in the supply chain. Examples of objects are vehicles, production facilities, documents, sensors, personnel etc. ObjectClasses define a global set of properties for their child Objects (their “instances”). In the example of a wine authenticity use case, the data shared among supply chain entities (winery, distributors, retailers etc.) involves information about specific batches of bottles with unique characteristics. The master data about a product would be represented as an ObjectClass node in the OT graph, while the specifics about the product batch would be contained within the “batch” Object. This allows for a hierarchical organization of objects, with a simplistic but robust class-like inheritance.

ObjectClasses are divided into:

  • Actors, which encompass companies, people, machines and any other entity that can act or observe objects within the supply chain. (the “Who”)
  • Products (supply chain objects of interest), which represent goods or services that are acted upon by actors (the “What”)
  • Locations, which define either physical or digital locations of products or actors (the “Where”)

Each of the Objects can then be further explained by custom defined subcategories.

Events and EventClasses

Events in the graph structure have a similar inheritance pattern: EventClasses classify types of events, which are instantiated as particular Event nodes. OriginTrail currently classifies four different event types:

  • Transport events, which explain the physical or digital relocation of objects in the supply chain.
  • Transformation events, which contain information about the transformation of one or more objects into (a new) one. An example would be the case of an electronic device (i.e. mobile phone), where the assembly is observed as a transformation event of combining different components – Objects - into one output Object, or the case of combining a set of SKUs in one group entity such as a transportation pallet. Similarly, a digital transformation event would be any type of processing of a digital product (i.e. mastering of a digital sound recording). This event type corresponds to GS1 AggregationEvents and TransformationEvents.
  • Observation events, which entail any type of observational activity such as temperature tracking via sensors or laboratory tests. This event corresponds to GS1 ObjectEvents that are published by one party (interaction between different business entities is not the primary focus of the event).
  • Ownership/custody transfer events, where the change of ownership or custody of Objects is distinctly explained. An example would be a sale event.

Each of the events can then be further explained by custom defined subcategories and meta data.

Connections

Connections are edges in the graph used to define relationships between Objects, ObjectClasses and Events. They are classified in three groups:

  • Inheritance connections (between ObjectClass and Object vertices, as well as between EventClass and Event vertices). These connections define that an Object is an instance of an ObjectClass, via the isInstanceOf edge.
  • Involvement connections (between Object and Event vertices) connect objects with events in which they are involved. For example, a transformation event of production would have input objects, output objects, a location where the production took place etc.
  • State connections (between two Object vertices) connect two or more objects that are related in some way. For example, an object can be owned by some supply chain actor.

An example of a graph illustrating all the above-mentioned entities is provided below. It shows a simplified scenario of the movement of an “engine” object within a supply chain: the engine undergoes a transformation event of being wrapped in a “package” by employee Bob from CarEngines LTD, is transported by an “AirTransport” company and is finally sold to “Buyer Alice Corp”.

Graph Example

Even this simple example illustrates the number of data connections needed to efficiently describe the scenario; it has been deliberately simplified to provide an easier logical overview.

Conclusion and future steps

This document outlines the data structure logic behind OriginTrail’s data layer, with the intention of providing high flexibility and data interoperability within the network. It is a work in progress, and we therefore invite the community to join and provide ideas and feedback on possible improvements and inefficiencies that may arise from such a scheme. The best way to contribute is to use the Improvement Proposals repository provided by the OriginTrail team. Further iterations on the structure will be based on use cases implemented and observed in the alpha and testnet, focusing on optimizations and simplifications of the structure.

Network structure

The network layer takes care of the accessibility and data governance of the underlying data layer. It consists of network nodes which all contain parts of the decentralized database and store graphs of the data. Access to the data is achieved through the provided data exchange API.

The peer-to-peer network is built on a Kademlia-based distributed hash table, which is responsible for efficient routing within the network. The messages between peers are signed, while the Kademlia node ID is a valid Ethereum address that the node is able to spend from. This enforces long-term identity, helps with Kademlia routing and protects against Eclipse attacks.

The peer-to-peer decentralized network operates as a serverless supply chain data storage, validation and serving network with built-in fault tolerance, DDoS resistance and tamper resistance, self-sustained by the incentive system explained in this document.

The intention of this paper is to document the research findings and mechanics behind the incentive model of OriginTrail, as well as to attract opinions and feedback from the community and researchers interested in the topic.

Network entities and classification

In order to better understand the OriginTrail P2P network structure and the incentive mechanisms within the protocol we have to understand all different roles within the context of the system.

The main premise is that different nodes have different interests given their roles. In order to ensure fair play on the network and provide a fair market, we have to understand the different entities, their aims, needs and relationships. Above all, we have to understand the possibilities of collusion between different entities and their possible motives, and construct incentives to mitigate them.

It is important to state that all nodes run the same software; their function in the context of the observed data determines how they are perceived, and one node can have different roles within different deals. Below is a list of the different entities and their roles in the system.

Data Provider

The data provider (DP) is an entity that publishes supply chain data to the network. A typical scenario would be a company that wants to publish and share data from its ERP system about products in its supply chain. Data providers can also be consumers interacting with the network through applications, or devices such as sensors that provide information about significant events in the supply chain.

The interest of the data provider is to safely store its data on the network and to be able to connect and cross-check it with the data of other DPs within the network. Depending on the use case, providing the data to the network can be incentivised with the Trace token.

Data Creator Node

The Data Creator node (DC) is a node responsible for importing the data provided by the DP and making sure that all of the DP’s criteria are met, such as availability of the data on the network for a desired time and a chosen replication factor. While we expect that Data Providers will typically run their own Data Creator nodes, this is not a requirement: third-party DC nodes may provide the service for one or several Data Providers. The DC node is the entry point of information into the network, and the relationship between the DP and DC is not regulated by the protocol.

The responsibility of the DC node is to negotiate, establish and maintain the service requested by the DP in relationship with its associated Data Holder (DH) nodes. Furthermore, DC nodes are responsible for checking that the data is available on the network during the time of service and for initiating the litigation process in case of any disputes.

Data Holder Node

The Data Holder (DH) is a node that has committed itself to store the data provided by a DC node for a requested period of time and make it available for the interested parties (which can also be the DC node). For this service the Data Holder will be compensated in TRAC tokens. The DH node has the responsibility to preserve the data intact in its unaltered, original form, as well as to provide high availability of the data in terms of bandwidth and uptime.

It is important to note that the DH node can be a DC node at the same time, in the context of the data that it has introduced to the network. As noted, the same software runs on all the nodes in the network, providing for symmetrical relations and thus not limiting scalability.

The Data Holder may also wish to find data that is popular but was not directly delivered to it by DCs, and offer it to interested parties. Therefore, it is likely that Data Holders will listen to the network, search for data that is frequently requested, and replicate it from other Data Holders in order to store, process and offer it to Data Viewers. However, since such Data Holders are not bound by a smart contract to provide the service, there is a certain risk that they may offer false data, tamper with the data, or even pretend to have data that they don’t have.

To mitigate this risk, a node is required to deposit a stake. This stake is forfeited if it is proven that the Data Holder tried to sell altered data, while Data Viewers have a mechanism to check whether all the chunks of the data are valid and to initiate a litigation procedure in case of any inconsistencies. Furthermore, Data Holders are able to deposit a larger stake if they want to demonstrate the quality of their service to Data Viewers.

Data Viewer

The Data Viewer (DV) is an entity that requests data from any network node able to provide it. The Data Viewer is able to send two types of queries to the network. The first type is a request for a specific set of batch identifiers of the product supply chain it is interested in, where it can retrieve all the connected data of the product trail. The second type is a custom query asking for specific connections between the data. In both cases, the Data Viewer receives offers from all the nodes that have the data, together with the charges for reading and the structure of the data that will be sent. The Data Viewer can decide which offers to accept and deposit the requested compensation funds on the escrow smart contract. The providing node then sends the encrypted data so the Data Viewer can test the validity of the data. Once the validity of the data is confirmed, the Data Viewer gets the key to decrypt the data, while the smart contract unlocks the funds for the party that provided the data.

The interest of the Data Viewer is to get the data as affordably as possible, but also to be sure that the provided data is genuine. Therefore, the Data Viewer also has the opportunity to initiate the litigation procedure in case the received data is not valid. If it is proven that the Data Viewer received false data, the stake of the corresponding DH node is lost.

The complete picture of the interaction between participants in the OriginTrail system is presented in the data flow diagram (Figure 1).

Figure 1. Data flow diagram

Service initiation

To get data onto the OriginTrail network, the Data Provider sends tokens and data to the chosen DC node. The Data Creator sends tokens to the smart contract with tailored escrow functionalities and broadcasts a data holding request with the required terms of cooperation. All interested DH node candidates then respond by submitting their applications to the smart contract, stating the price of the service per data unit and the minimum time of providing the service.

The minimum replication factor is 2N+1, where the minimum value of N is yet to be determined; the actual factor may be larger, as it is decided by the Data Creator. To mitigate the possibility of fixing the results of the public offering, the smart contract closes the application procedure only once the number of Data Holders that have answered the call exceeds the requested replication factor. Once the application procedure is finished, the smart contract selects the required number of Data Holders, so that a potentially malicious Data Creator who owns several DH nodes cannot influence the process and pick its own nodes.

The Data Creator deposits the compensation in tokens for the Data Holders on an escrow smart contract, from which the Data Holders can progressively withdraw as time passes, up to the full amount once the period of service is successfully finished. The smart contract takes care that the funds are unlocked incrementally. It is up to the Data Holder to decide how often it will withdraw the funds for the part of the service that has already been delivered.

In order to participate in the service, the Data Holder also has to deposit a stake in an amount proportional to the value of the job. This stake is necessary as a security measure that the data will not be deleted or tampered with in any way, and that it will be provided to third parties according to the requirements.

Servicing period

Data replication

After the agreement between the Data Creator and the Data Holders has been created, the Data Creator prepares the data by splitting the graph vertex data into blocks and calculating a root hash, which is then stored on the blockchain. The root hash is stored permanently so that anyone can prove the integrity of the data. The data is then encrypted using RSA encryption and the encryption key is appended to it. A Merkle tree is again created for the encrypted data blocks, proving the integrity of the data that will be sent to the Data Holder. The root hash of the encrypted data is written to the escrow contract, and finally the data can be sent to the Data Holder. Upon receiving the data, the Data Holder verifies that the root hash of the received data is indeed the one written into the escrow contract; if it is a match, the testing and payment process can begin.

Testing and compensation

To ensure that the service is provided as requested, the Data Creator is able to test Data Holders by sporadically asking them for a random encrypted data block. If the Data Creator suspects that the data is no longer available or has been altered in any way, it can initiate the litigation procedure, in which the smart contract decides whether the Data Holder is able to prove that it still has the data available.

Litigation procedure

The litigation procedure involves a smart contract as a validator of the service. When the Data Creator challenges the Data Holder to prove to the smart contract that it is storing the agreed-upon data, it sends a test question to the smart contract in the form of a requested data block number. In response, the Data Holder sends the requested block to the smart contract. The Data Creator then sends the Merkle proof for the requested data block, and the smart contract calculates whether the hash of the requested block fits the proof.

If the proof is not valid for the data block hash, there are two options: either the Data Holder is not storing the agreed-upon data and is thus unable to submit the correct answer, or the Data Creator has created and submitted a false (unanswerable) test. The dilemma is resolved by the Data Creator sending to the smart contract the correct data block that fits the already submitted Merkle proof and Merkle root hash. If the Data Holder’s block is incorrect for the given proof, the Data Holder loses its deployed stake and the stake is transferred to the Data Creator. Conversely, if the Data Creator cannot prove its own test, it has sent a false test and its stake is transferred to the Data Holder. If it is proven that the DH no longer has the original data, the smart contract initiates the procedure of DH replacement.

Proving mechanism

The Merkle tree for data blocks <B1, B2, …, Bn> is a balanced binary hash tree in which each internal node is calculated as the SHA3 hash of the concatenation of its child nodes. The i-th leaf node Li is calculated as Li = SHA3(Bi, i). The root hash R of the Merkle tree is the SHA3 hash of the root’s child nodes. The Merkle proof for block Bi is a tuple of hashes <P(0), P(1), …, P(h−1)>, where h is the height of the Merkle tree. For the proof to be valid, it needs to satisfy the tuple of tests <T(0), T(1), …, T(h−1)> such that T(0) = SHA3(Li, P(0)), T(i) = SHA3(P(i), T(i−1)) for i > 0, and T(h−1) = R. To prove the integrity of the answer block Bk, the smart contract calculates the leaf hash L(k) = SHA3(Bk, k) and computes the chain of tests up to T(h−1). If T(h−1) matches the root hash R, the answer block’s integrity is unchanged from when the tree was created. The diagram of the proving mechanism is shown in Figure 2.


Figure 2. Merkle proof diagram

Querying data

The Data consumer broadcasts a query for the data it needs through its associated node. Any DH that stores the data can reply to the broadcast. The Data consumer then selects a DH by its own criteria, creates an escrow contract and deposits tokens for payment. The DH sends the encrypted data to the Data consumer, and the Data consumer randomly selects one data block and sends it to the escrow contract together with the block number. After that, the DH needs to reply with the unencrypted block, the key that was used for encryption and the Merkle path proof showing that the block is valid. If everything is valid, the tokens are transferred to the DH node and the Data consumer can take the key to unlock the data.

Conclusion and further research

This document represents the first version of the incentive mechanism and is intended to illustrate the network mechanics. The focus of upcoming research on the incentive model will be on simulating the activities in the network based on larger-scale tests in real network conditions. We invite the community to provide opinions, ideas and feedback to further improve the model and the document.

Installation

Read Me First

Please bear in mind that we are only able to give you limited support for testing the nodes; some features will probably change, and we are aware of some bugs that may show up in different installation and usage scenarios. If you need help installing the OT node or troubleshooting your installation, you can contact us directly via email at support@origin-trail.com.

Nodes can be installed in several ways:

NOTE: For best performance testing, we recommend using services like DigitalOcean.

In order to install the OT node, complete the following steps:

  1. Step 1 - Install all prerequisites. There is an automatic installation script, or you can install them manually (explained below).
  2. Step 2 - Node Configuration
  3. Step 3 - Import data

Prerequisites

System requirements

  • Minimum of 2 GB of RAM
  • Minimum of 5 GB of storage
  • An Ethereum wallet and some Ether on the Rinkeby Test Network (see the Wallet Setup section for instructions)

Manual Prerequisites Installation

NodeJS

If you don’t have Node.js installed head to https://nodejs.org/en/ and install version 9.x.x.

Note: Make sure you have precisely the version of Node.js specified above. Some features will not work well on versions lower or greater than 9.x.x.

Before starting, make sure your server is up-to-date. You can do this with the following commands:

sudo apt-get update -y
sudo apt-get upgrade -y
curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
sudo apt-get install -y nodejs
Database - ArangoDB

ArangoDB is a native multi-model, open-source database with flexible data models for documents, graphs, and key-values. We are using ArangoDB to store data. In order to run OT node with ArangoDB you need to have a local ArangoDB server installed and running.

Head to arangodb.com/download, select your operating system and download ArangoDB. You may also follow the instructions on how to install with a package manager, if available. Remember the credentials (username and password) used to log in to the Arango server, since you will need to set them in .env later.

Ubuntu 16.04
wget https://www.arangodb.com/repositories/arangodb3/xUbuntu_16.04/Release.key
sudo apt-key add Release.key
sudo apt-add-repository 'deb https://www.arangodb.com/repositories/arangodb3/xUbuntu_16.04/ /'
sudo apt-get update -y
sudo apt-get install arangodb3

When asked, enter the password for root user.

Mac OS X

For Mac OS X, you can use homebrew to install ArangoDB. Run the following:

brew install arangodb
Database Setup

Once you have installed ArangoDB, you should create a database. Enter the ArangoDB shell

arangosh

and create database

db._createDatabase("origintrail", "", [{ username: "otuser", passwd: "otpass", active: true}])
Database - Neo4j

Neo4j is a graph database management system with native graph storage and processing. Its architecture is designed for optimizing fast management, storage, and the traversal of nodes and relationships. In order to run OT node with Neo4j make sure to have it installed and running.

Head to neo4j.com/download, select your operating system and download Neo4j. You may also follow the instructions on how to install with a package manager, if available.

Ubuntu 16.04

First you have to install Java 8 and set it as the default.

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo apt-get install -y oracle-java8-set-default

Run the following:

wget -O - https://debian.neo4j.org/neotechnology.gpg.key | sudo apt-key add -
echo 'deb https://debian.neo4j.org/repo stable/' | sudo tee /etc/apt/sources.list.d/neo4j.list
sudo apt-get update
sudo apt-get install -y neo4j

Automatic installation

This will install all prerequisites in a single step.

wget https://raw.githubusercontent.com/OriginTrail/ot-node/master/install.sh
sh install.sh --db=arangodb

If you prefer Neo4j as the database, then use:

wget https://raw.githubusercontent.com/OriginTrail/ot-node/master/install.sh
sh install.sh --db=neo4j

Note: There are some ongoing issues with Neo4j. We currently advise use of ArangoDB.

If errors occur during the installation process, ot-node will probably not work properly. Installation errors happen due to various factors, such as a lack of RAM or previous installations. We strongly recommend installing on a clean system with at least 2 GB of RAM (it may work with 512 MB and a swap file). You can check this link and run the automatic installation and setup again.

If you used this automatic installation script, you may proceed to Node Configuration. Then you can start the node.

Manual Node Installation

Clone the repository

git clone -b master https://github.com/OriginTrail/ot-node.git

and run npm

cd ot-node && npm install
cp .env.example .env

You can proceed to node configuration

Starting The Node

The OT node consists of two servers: an RPC server and a Kademlia node. Run both with a single command:

npm start

You can find instructions regarding data import in the Import data section.

Important Notes

The first time you run your node, run npm run bootstrap to apply the initial configuration.

Every time you change your configuration in .env don’t forget to run npm run config to apply updated configuration.

In order to make the initial import, your node must whitelist the IP of the machine that is requesting the import in .env, e.g. IMPORT_WHITELIST=127.0.0.1 if you are importing from localhost.

Wallet Setup

You can check out the video for this section.

The TRAC token required for the OriginTrail Decentralized Network is an ERC20 token on the Ethereum network. In order to use the ODN you will need both TRAC and ETH in your wallet. The ERC20 standard enables the use of one wallet for both.

The ODN testnet is based on the Ethereum Rinkeby test network and Alpha TRAC tokens (ATRAC). For security reasons, we strongly advise using a dedicated wallet for the ODN testnet. Please don’t store real TRAC tokens or ETH in that wallet at this stage. ATRAC and Rinkeby ETH have no commercial value.

OriginTrail is not responsible for setup and use of wallets and related software. We also do not provide support for this software.

If you need ATRAC tokens, please send us an email at support@origin-trail.com with your wallet address. You can also visit our faucet.

Metamask

The easiest way to do this is to install MetaMask. MetaMask is a Chrome Extension that allows you to interact with the Ethereum network without having to run a node on your system.

You can download MetaMask from metamask.io.

Install the Chrome plugin. This adds a little fox icon to your extension bar. Click on the icon and follow instructions to accept disclaimer and terms & conditions.

When you click on the “Ethereum Main Net” button in the top left of the MetaMask window, you will be able to select the network you want to connect to. Please select the Rinkeby Test Network.

Wallet

Once MetaMask is connected, you will have to enter a new password to encrypt your new MetaMask vault that will hold your keys generated by MetaMask. You will get a new wallet address.

In order to copy your wallet address, click on the three dots in the top right corner and select Copy Address to clipboard. You can use it to get some test Ether from the faucet.

To get a private key you can click on the same menu and select Export Private Key.

Alternatively, if you need to create a new wallet and associate it to MetaMask, you can go to https://www.myetherwallet.com/ and follow the procedure. In the last step just select Connect to Metamask.

Get some funds on the wallet

In order to test, you will need some Ether in your wallet. Initially your balance will be 0. To get some test Ether, go to https://faucet.rinkeby.io/ and follow the instructions. Remember, test network Ether does not have any exchange value; it is just for testing and development purposes.

Once you get Ether on your account, you will be able to see your new balance in MetaMask.

Node Configuration

.env File Configuration

Go into ot-node folder (if you are not already there) with cd ot-node

Open .env in your favorite editor and set the configuration parameters or type this in console: nano .env

Make sure that the username and password you set when installing your preferred database are set as the values of the respective database USERNAME and PASSWORD variables.

You can view .env Example here

Once you have set up your .env file, run one of the following commands from the ot-node folder, depending on your situation:

  • if this is your initial configuration:
npm run bootstrap
  • if you are updating node setup (node was working already):
npm run config

Every time you change your configuration in .env don’t forget to run this command to apply that configuration.

Import data

In order to make the initial import, there are several prerequisites:

  • Your node must whitelist the IP of the machine that is requesting the import in .env, e.g. IMPORT_WHITELIST=127.0.0.1 if you are importing from localhost.
  • The data has to be properly formatted. A further description of the data standards can be found in the Data Structure Guidelines.
  • The node installation includes example files in the project for your reference and testing. If you want to test the import on them, just replace the wallet in the file with the wallet of your node.

Once the data is ready, the import process can be executed in several ways on ODN:

  • Please check the API section for detailed instructions.
  • You can also use the Houston App interface for data import.

Depending on the use case, we strongly recommend setting up a periodic import of the data into ODN (e.g. a daily cron job exporting the data from your ERP in the requested data format).

Introduction to API

The purpose of this API is to allow data operations on a single node you trust, in order to control the data flow via API routes: for example, importing a single data file into a node’s database, replicating the data on the network, or reading it from the node.

Prerequisites

Your node must be running, and you must have properly set up a listening address and an import address whitelist. Your listening address, as well as the import whitelist, can be set in the .env file before running the node for the first time. For instructions on how to configure the node, check the Node Configuration page.

Import

Importing the data to ODN is executed in several steps:

  1. /api/import, with the data as a parameter, imports the data from the data source into the local graph database on the DC node. The import_id of the data is returned to the data creator as a parameter in the response.
  2. /api/replication, with import_id as the input parameter, replicates the data from the local graph database on the DC node to other DH nodes via the bidding mechanism.

After all steps are completed, the imported data can be read from ODN via the read procedure.

/api/import POST

Imports new GS1 or WOT structured data into a node’s database. Find out more about the data format in the Data Structure Guidelines.

Parameters

POST http://NODE_IP:PORT/api/import
Name Required Type Description
importfile true file or text data that you want to import (ex. GS1 XML file)
importtype true predetermined text GS1 or WOT. This describes data standard.

If successful returns import_id for this data import.

Example:

{
    "importfile": "importfile=@/ot-node/test/modules/test_xml/GraphExample_1.xml"
    "importtype": "GS1"
}
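
As a quick illustration (assuming the route accepts multipart form data, as the importfile=@… value above suggests), the call could be made with curl:

curl -X POST http://NODE_IP:PORT/api/import \
     -F "importfile=@/ot-node/test/modules/test_xml/GraphExample_1.xml" \
     -F "importtype=GS1"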

Responses

201 File was successfully imported.

Example:

{
    "import_id": "0x477eae0227cce0ffaadc235c7946b97cbe2a948fe7782796b53a0c5a6ca6595f"
}

400 Invalid import parameters (importfile/importtype)

Example:

{
     "message": "Invalid import type"
}

405 Invalid input

/api/import_info GET

Lists detailed information about a specific data import.

Parameters

GET http://NODE_IP:PORT/api/import_info?import_id={{import_id}}
Name Required Type Description
import_id true text import_id returned after calling /api/import endpoint

If successful, returns detailed information about the desired import.
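
For illustration, reusing the import_id returned in the /api/import example above, the call could look like this with curl:

curl "http://NODE_IP:PORT/api/import_info?import_id=0x477eae0227cce0ffaadc235c7946b97cbe2a948fe7782796b53a0c5a6ca6595f"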

Responses

200 Data about the desired import successfully retrieved.

Example:

{
    "edges": [...],
    "vertices": [...]
    "import_hash": "0xe9c407ce686b68bedf758f9513143fbe7c08be12cfc76aa28bf4710e303f4a94",
    "root_hash": "0xea94a2abc75f5e03ff4e5db8adbd2dad26ce9f4d6fa6db6e880891005a715c26",
    "transaction": "0x3711223191991a684ffae3c8672491b1cd74520ca69b91969211d671489f2e71"
}

404 Bad request. Invalid input parameter (import_id)

Example:

{
    "message": "Import data for import ID  does not exist"
}

500 Internal error happened on server.


Replication

Replication initiates an offer for a previously imported data set on the blockchain. On success, the API route will return the ID of the offer, which can later be used to query the status of the created offer. After calling the replication API route, the offer itself will be executed in the background, and the node will monitor the offer statuses and the bids that other DH nodes create in response to the offer. Please keep in mind that the offer depends on the input parameters set up in the node, which may result in a long bidding time.

For checking the status of the replication request, see the /api/replication/{replication_id} route.

/api/replication POST

Creates an offer and triggers replication.

Parameters

POST http://NODE_IP:PORT/api/replication
Name Required Type Description
import_id true text ID of the import you want to replicate
total_escrow_time_in_minutes false number Total time of the replication in minutes
max_token_amount_per_dh false text Maximum price per DH in token’s wei
dh_min_stake_amount false text Minimum stake amount per DH in token’s wei
dh_min_reputation false number Minimum required reputation (0 is the lowest)

This call returns a replication_id, which can be used in /api/replication/{replication_id} as the input parameter for checking the status of the replication.

You can obtain the import_id from the response of /api/import, or you can view this value as a vertex attribute in the graph database.

Example:

{
    "import_id": "0x7e9d30d3f78fd21180c9c075403b1feeace7fbf10e10ad4184dd8b7e38358bc6"
}
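
A minimal curl sketch of this call (assuming the body is sent as JSON, as in the example above):

curl -X POST http://NODE_IP:PORT/api/replication \
     -H "Content-Type: application/json" \
     -d '{"import_id": "0x7e9d30d3f78fd21180c9c075403b1feeace7fbf10e10ad4184dd8b7e38358bc6"}'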

Responses

200 OK

Example:

{
  "replication_id": "60bc3cd1-9b2c-4e12-b59a-14405ec73ce5"
}

400 Import ID not provided

405 Failed to start offer

/api/replication/{replication_id} GET

Gets the status of the replication with replication_id.

Parameters

GET http://NODE_IP:PORT/api/replication/{replication_id}
Name Required Type Description
replication_id true text replication ID for initiated import

Returns one of the following statuses for the replication:

  • PENDING: preparation for offer creation (depositing tokens) on the blockchain
  • STARTED: the replication is initiated, the offer is written on the blockchain and is waiting for bids
  • FINALIZING: the offering time has ended and bid selection is in progress
  • FINALIZED: the offer is finalized on the blockchain and the bids are chosen
  • CANCELLED: the previously created offer was cancelled
  • FAILED: the offer has failed

You can obtain the replication_id from the response of the POST /api/replication call.
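
For illustration, using the replication_id value from the POST /api/replication response example above, the status could be checked with curl:

curl http://NODE_IP:PORT/api/replication/60bc3cd1-9b2c-4e12-b59a-14405ec73ce5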

Responses

200 OK

Example:

{
    "offer_status": "PENDING"
}

400 Replication ID is not provided

404 Offer not found


Read

Reading data from ODN can be performed in two ways:

  • Network read is conducted over ODN in several steps. The node executing the call is the medium conducting the read procedure (not the actual data source). This type of read costs TRAC and ETH tokens. The data is acquired from the DH node whose bid meets the requirements of the node that requested the data. At the end of the network read procedure, the newly acquired data is permanently available in the local node graph database. A network read has several steps that need to be executed:
  1. /api/query/network, where the query is sent across ODN to get the data. This call returns a query ID.
  2. The /api/query/{query_id}/responses call shows the responses to the query from the previous call.
  3. /api/read/network, with parameters from the previous calls, returns the data and executes all required token transfers.
  • Local read from the local graph database on the node. The node executing the call is the data source. This type of read does not cost any TRAC (or ETH) tokens. It is a trusted read from the node that provides the data. That data can differ from the data on ODN; potential differences can be examined via the litigation procedure.

/api/query/network POST

Publishes a network query for a supply chain data set using a simple, specific DSL query. The API route will return the ID of the query, which can be used for checking the status of the query. The actual querying of the network lasts approximately one minute, during which the node gathers the offers for the query responses (read operation) and stores them in internal database storage.

The query must be in JSON format:

{
    "query":
        [
            {
                "path": "<SOME_ID>",
                "value": "<SOME_VALUE>",
                "opcode": "<OPERATOR>"
            }, ...
        ]
}

Supported operators are:

  • EQ: when ID equals Value
  • IN: when ID is in Value

Refer to /query/network/{query_param} GET

POST http://NODE_IP:PORT/api/query/network
Name Required Type Description
query true DSL query DSL query for data on ODN

Example:

{  "query":  [{
    "path": "identifiers.id",
    "value": "urn:epc:id:sgln:Building_1",
    "opcode": "EQ"
}]
}
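
A minimal curl sketch of this call, sending the example query above as a JSON body:

curl -X POST http://NODE_IP:PORT/api/query/network \
     -H "Content-Type: application/json" \
     -d '{"query": [{"path": "identifiers.id", "value": "urn:epc:id:sgln:Building_1", "opcode": "EQ"}]}'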

Responses

200 Always, except on an internal server error or a bad request. The body will contain a message in JSON format with at least the ‘status’ and ‘message’ attributes. If the query was successful, an additional attribute ‘query_id’ will be present, containing the UUID of the query, which can be used to check the result or status of the query.

400 Bad request

500 Internal error happened on server.

/api/query/{query_id}/responses GET

Returns the list of all the offers for the given query. The response is formatted as an array of JSON objects containing offer details.

GET http://NODE_IP:PORT/api/query/{query_id}/responses
Name Required Type Description
query_param true text UUID of network query
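
For illustration, with a query_id returned by /api/query/network (the UUID below is the example value used in the /api/read/network section later in this document):

curl http://NODE_IP:PORT/api/query/76141d3e-378f-4a9a-8b43-d24f8982ef2e/responses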

Responses

200 Always, except on an internal server error. The body will contain a message in JSON format with at least the ‘status’ and ‘message’ attributes. ‘message’ will contain the status of the query in the format Query status $status. If the status is FINISHED, the body will contain another attribute, ‘vertices’, containing all query result vertices.

500 Internal error happened on server.


/api/query/network/{query_param} GET

Checks the status of the network query

The network query can have the following status:

  • OPEN: the initial status of the query, which means it has been published to the network
  • FINISHED: the query has been completed; the required time has elapsed and the offers can be reviewed via the route …
  • PROCESSING: the selected offer is currently being processed
  • FAILED: in case of an error or a failed query
GET http://NODE_IP:PORT/api/query/network/{query_param}
Name Required Type Description
query_param true text UUID of network query

Responses

200 Always, except on an internal server error. The body will contain a message in JSON format with at least the ‘status’ and ‘message’ attributes. ‘message’ will contain the status of the query in the format Query status $status. If the status is FINISHED, the body will contain another attribute, ‘vertices’, containing all query result vertices.

500 Internal error happened on server.


/api/read/network POST

Initiates reading from the network node selected from the previously posted reading offers.

POST http://NODE_IP:PORT/api/read/network
Name Required Type Description
query_id true text ID of the query
reply_id true text ID of the reply
import_id true text ID of the import

Example:

{
     "query_id": "76141d3e-378f-4a9a-8b43-d24f8982ef2e",
     "reply_id": "fdb5e3ba-9fb0-4a86-910e-110e4b8abd5f",
     "import_id": "0xe1f05500c1352309e009aaf77f589b4b62b895908da69d7c90ebc5d5c05cf372"
}
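
A minimal curl sketch of this call, sending the example body above as JSON:

curl -X POST http://NODE_IP:PORT/api/read/network \
     -H "Content-Type: application/json" \
     -d '{"query_id": "76141d3e-378f-4a9a-8b43-d24f8982ef2e", "reply_id": "fdb5e3ba-9fb0-4a86-910e-110e4b8abd5f", "import_id": "0xe1f05500c1352309e009aaf77f589b4b62b895908da69d7c90ebc5d5c05cf372"}'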

Responses

200 OK

400 Bad request

Example:

{
    "message": "Invalid import type"
}

Local read

/api/query/local POST

Run local query on the database

POST http://NODE_IP:PORT/api/query/local
Name Required Type Description
query true text DSL query

Returns data from local graph database for requested query.

Example

{
    "<IDENTIFIER_KEY1>": "<VALUE1>",
    "<IDENTIFIER_KEY2>": "<VALUE2>", ...
}

Responses

200 Array of found vertices for given query

204 No vertices found

500 Internal error happened on server

/api/query/local/import POST

Queries a node’s local database and returns all the import IDs that contain the results of the query.

POST http://NODE_IP:PORT/api/query/local/import
Name Required Type Description
query true JSON query Query object

The query must be in JSON format:

{
    "query":
        [
            {
                "path": "<SOME_ID>",
                "value": "<SOME_VALUE>",
                "opcode": "<OPERATOR>"
            }, ...
        ]
}

Supported operators are:

  • EQ: when ID equals Value
  • IN: when ID is in Value
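
For illustration, the same query used in the /api/query/network example could be run locally with curl (assuming a JSON body, as above):

curl -X POST http://NODE_IP:PORT/api/query/local/import \
     -H "Content-Type: application/json" \
     -d '{"query": [{"path": "identifiers.id", "value": "urn:epc:id:sgln:Building_1", "opcode": "EQ"}]}'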

Responses

200 Array of found vertices for given query

204 No vertices found

400 Bad request

500 Internal error happened on server

/api/query/local/import/{import_id} GET

Returns given import’s vertices and edges and decrypts them if needed.

GET http://NODE_IP:PORT/api/query/local/import/{import_id}
Name Required Type Description
import_id true text Import ID attribute

Import ID for example: 0x477eae0227cce0ffaadc235c7946b97cbe2a948fe7782796b53a0c5a6ca6595f
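
For illustration, using the example import ID above, the call could look like:

curl http://NODE_IP:PORT/api/query/local/import/0x477eae0227cce0ffaadc235c7946b97cbe2a948fe7782796b53a0c5a6ca6595f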

Responses

200 OK

400 Param required



Profile Token Management

/api/deposit POST

Deposit tokens from wallet to a profile.

POST http://NODE_IP:PORT/api/deposit
Name Required Type Description
query true JSON query Query object

The query must be in JSON format:

{
    "atrac_amount": 10
}
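
A minimal curl sketch of this call, sending the example body above as JSON:

curl -X POST http://NODE_IP:PORT/api/deposit \
     -H "Content-Type: application/json" \
     -d '{"atrac_amount": 10}'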

Responses

200 Successfully deposited 10 ATRAC to profile

400 Bad request

Note: the value of 10 used above is just an example; it can be any amount available in the wallet.

/api/withdraw POST

Withdraw tokens from profile to a wallet.

POST http://NODE_IP:PORT/api/withdraw
Name Required Type Description
query true JSON query Query object

The query must be in JSON format:

{
    "atrac_amount": 10
}

Responses

200 Successfully withdrawn 10 ATRAC to wallet your_wallet_id

400 Bad request

Note: the value of 10 used above is just an example; it can be any amount available in the profile.


Implementation

Setting up a Project on OriginTrail

Here is a short overview of how to set up a project on ODN. This document is aimed towards enterprise and solution provider entities.

Step 1: Structuring a use case

Defining, structuring and formalizing the use case is the most important step of the implementation, as it drives all the other steps forward. In this step it is important to define the process flow, the goals, the stakeholders and the data needed for OriginTrail, as well as which data will be utilized and visible in the end, and in what way.

Things to consider when structuring a use case:

  • Define the challenges of the problem and the solution proposition
  • Define the short- and long-term vision and what exactly the purpose of the solution is
  • Define who the participants of the solution are and their scope
  • Define key value propositions for every individual in the supply chain
  • Outline the process, describe formal process flow steps and identify data points that will trigger events for the OriginTrail protocol
  • Define the OriginTrail features that will be used (Consensus check, Zero Knowledge Protocol, …)
  • Define the data sets that are required for processing
  • Define who the beneficiary of the end solution is (who will use the data stored on ODN)
  • Define how the data on ODN will be utilized in the frontend solution

The project should be structured into phases to ensure smooth transition of the project and to test the initial ideas set in the use case. There are three important phases in this process:

  • POC Exploration - Make proof of concept for proving the validity of the use case
  • Test Phase - Test the project with larger set of stakeholders
  • Live phase - Monetize the solution

Each of these phases needs to have defined steps for easier tracking of the project. The important aspects consist of:

  • Stakeholders - Who is involved in each phase
  • Responsibilities - what is the role of each stakeholder and what activities each needs to perform
  • Required data - What is the required data each stakeholder needs to provide
  • Time-frame of the phase - How long each phase takes

It is important that the POC phase is structured accordingly; after all the aspects of the use case have been defined, the project can move forward to the next steps.

Step 2: Getting raw data and structuring the sample files

Once you identify which data you need along the process described in the use case, the primary goal is to align the data structure across all the stakeholders in the supply chain. Since different supply chain entities use different IT systems (SAP, Navision, Oracle, …), these data sets must be unified in order to take advantage of the relational nature of the supply chain data.

These data sets must be structured according to the OriginTrail Implementation Kit. If the data source is an entity’s ERP system, the GS1 EPCIS data standard should be used; the output of this step is a structured sample XML file. If the data sources are IoT devices, the Web of Things data standard should be used (EPCIS is also valid).

Steps in the process:

  • Identify data source and data owner (ERP system or IoT devices)
  • Acquire data samples from each data source of each stakeholder along the supply chain, at the data points defined in the use case
  • Define and resolve data privacy issues (what data is sensitive, what should be encrypted etc.)
  • Each company must structure data according to the instructions in the Implementation Kit, provided by OriginTrail.

The outcome of this step is a sample file, which should be generated with an export integration procedure from the existing company IT system.

Step 3: Installing and setting up a node

Node requirements and installation steps are specified in the Implementation Kit. This step also includes setting up the wallet for Ethereum and TRAC tokens. Installation sets up the node software and makes it part of the ODN network. This step requires intermediate IT skills.

Steps in the process:

  • Setting up a dedicated server
  • Setting up a Node software
  • Setting up a wallet

Step 4: Putting the data through Node

In this step you need to put the sample data from step 2 through the node installed in step 3. Once you do that, the protocol takes care of the replication and the blockchain connection. In this way, your data will be stored in a cost-efficient, decentralised way with a connection to the blockchain layer (the data is on ODN).

An important part of this step is that the company that wishes to put data on ODN will need TRAC tokens to compensate the node holders (where the data will be stored). The price of the transaction will be determined by the node holders based on the volume of data and the required storage time.

Step 5: Integration of OriginTrail with enterprise IT system

Integration means establishing communication from the participant’s ERP to ODN. Communication is done through the exchange of standardized, structured files. The exchange can be done by uploading the files via web services or manually. Since the data structure was determined in previous stages, the only thing left to do is to extract the data into that structure, send it to ODN through the node and automate the whole process.

Steps in the process:

  • Generating files from the data source in the standardized format
  • Implementing a procedure that triggers the web services that send or receive the files from ODN
  • Testing and evaluating the procedures according to the use case

Step 6: Utilization of the data on ODN

After the data is stored on ODN (OriginTrail Decentralised Network), a frontend user interface must access, process and display the data. This step brings the project back to the initial use case structuring. This part should be outlined at the beginning of the project, as it visualises what the end beneficiary of the data will see. Since each use case is unique and requires its own interface, OriginTrail provides an API for accessing the data but does not develop frontend solutions (user interfaces such as mobile apps or websites). This is in the domain of the service providers.

Things to consider when structuring a use case:

  • Who is the beneficiary of the data
  • How the data will be presented to the data beneficiary
  • What kind of user interfaces are needed (web, mobile)

Data Structure Guidelines

This page illustrates how to structure data for the OriginTrail protocol in order to utilize the data sharing and connectivity functionalities of the Alpha version of the OriginTrail Decentralized Network (ODN) as part of the Data Layer. It corresponds to the attachments and examples provided with this file and the documentation on GitHub. The defined structure was developed from best practices observed over several years of implementing transparency solutions within the food supply chain industry, and is based on GS1 standards.

Problem definition

OriginTrail is a protocol that enables the exchange of standardized data among disparate IT systems in multi-organizational environments in a tamper-proof way. Therefore, each participant of the exchange should provide their data in a common, standardized format. To showcase the full potential of the OriginTrail protocol, communication from the ERP systems of at least two entities within a supply chain towards ODN needs to be established. The communication is done periodically by sending standardized XML files to an OriginTrail network node; their structure is explained further in this document.

OriginTrail also stores and processes data that is generated and received via IoT devices. This data is usually not stored within an ERP system; it is processed by designated software at read points into JSON format and then processed further. This type of data can be handled with the GS1 EPCIS standard and the Web of Things (WoT) standard. Regardless of which standard is used, ODN will process the data and store it in the graph database. If structured properly, related data from the two data sources (formats) will be interconnected.

The upload of data in XML format to the OriginTrail protocol node is performed via a web service endpoint. The process of extracting data from the ERP system, including its periodic forwarding to the OriginTrail node API endpoint, is out of the scope of this document. This document focuses on the standardized data structure that data creator nodes can process via their importer.

Types of data structure

OriginTrail is primarily focused on GS1 data standards, but other standards will also be supported.

  1. GS1 EPCIS standards.
  2. Web of things.

General data structure guidelines

  1. Research and choose the proper standard according to the defined use case.
  2. Choose the data structure and download the sample files.
  3. Edit the sample files to match the use case.
  4. Validate the samples.
  5. Create an integration procedure that generates standardized files from the data source and sends them to the DC node.

GS1 EPCIS structure

The goal of EPCIS is to enable disparate applications to create and share visibility event data, both within and across enterprises. Ultimately, this sharing is aimed at enabling users to gain a shared view of physical or digital objects within a relevant business context.

The file structure is based on the XML data format. Data structuring is performed with the use of the official guidelines (EPCIS and CBV). The syntax and XML node structure fully correspond to GS1 standards and the provided XSD schemes. OriginTrail has introduced a namespace (urn:ot:*) for custom identifiers. The OriginTrail namespace primarily introduces standardization of attributes that will be used in graph vertices, while values should follow the GS1 namespace.

EPCIS data structuring guidelines

  1. Research the GS1 implementation guide and the OriginTrail guidelines.
  2. Collect the data samples that you want to store on ODN according to the defined use case, consult the official implementation guide and our documentation, and download the XSD schemes.
  3. Connect the XSD scheme and the sample file in an advanced XML editing software.
  4. Modify the sample file according to your data architecture and validate the changes against the XSD file.
  5. Map your data structure to the sample file.
  6. Generate the file from your ERP and validate the data against the XSD file.
  7. Send the file to ODN via the API through a DC node.

GS1 EPCIS XML File structure

The XML file contains three main logical parts: document data, master data and visibility event data. All parts of the file must conform to the GS1 XSD schema.
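
As a rough sketch, the overall layout of such a file looks as follows (element names follow the EPCIS 1.2 XSD; the header details are simplified and the official sample files may differ):

    <epcis:EPCISDocument xmlns:epcis="urn:epcglobal:epcis:xsd:1"
                         xmlns:sbdh="http://www.unece.org/cefact/namespaces/StandardBusinessDocumentHeader"
                         schemaVersion="1.2" creationDate="2018-01-01T00:00:00">
        <EPCISHeader>
            <sbdh:StandardBusinessDocumentHeader>
                <!-- document data: sender, receiver, document identification -->
            </sbdh:StandardBusinessDocumentHeader>
            <extension>
                <EPCISMasterData>
                    <!-- master data: vocabularies describing actors, products, batches, locations -->
                </EPCISMasterData>
            </extension>
        </EPCISHeader>
        <EPCISBody>
            <EventList>
                <!-- visibility event data: ObjectEvent, AggregationEvent, TransformationEvent -->
            </EventList>
        </EPCISBody>
    </epcis:EPCISDocument>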

The example XML files, as well as the XSD schema, can be found here. The example files are organized in several folders. Each folder represents one use case scenario with several events that are described in detail. The scenarios are described below in this document.

If you want full compliance of the data structure, we strongly advise using advanced XML editing software that verifies whether your data structure corresponds to the XSD schema proposed by GS1.

Document data

The EPCIS guideline suggests the “Standard Business Document Header” (SBDH) standard for describing the document data. This data resides in the EPCIS Header part of the file and carries basic information about the file (sender, receiver, ID, purpose, etc.). Although OriginTrail is the receiver of the file and could be named as a receiver (SBDH allows defining multiple receivers), it is not necessary to include it: the receiver is an entity involved in the business process, not in the data processing.
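
A minimal sketch of such a header, assuming the standard SBDH element names (all identifier values below are placeholders):

    <sbdh:StandardBusinessDocumentHeader>
        <sbdh:HeaderVersion>1.0</sbdh:HeaderVersion>
        <sbdh:Sender>
            <sbdh:Identifier Authority="OriginTrail">urn:ot:object:actor:id:Green</sbdh:Identifier>
        </sbdh:Sender>
        <sbdh:DocumentIdentification>
            <sbdh:Standard>GS1</sbdh:Standard>
            <sbdh:TypeVersion>V1.3</sbdh:TypeVersion>
            <sbdh:InstanceIdentifier>100001</sbdh:InstanceIdentifier>
            <sbdh:Type>Shipping</sbdh:Type>
            <sbdh:CreationDateAndTime>2018-01-01T00:00:00</sbdh:CreationDateAndTime>
        </sbdh:DocumentIdentification>
    </sbdh:StandardBusinessDocumentHeader>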

Master data

The EPCIS standard describes four ways to process master data. OriginTrail supports the most common one - including master data in the header of an EPCIS XML document. Master data is processed separately from other data, and there is no need to include it in every file; only data that has not yet been sent to the ODN should be provided. If the same version of the master data already exists on the ODN, it will be omitted and not processed further. However, if there is visibility event data related to master data that is not on the ODN, queries in the graph database may have problems (the file will still be processed on the ODN).
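
Structurally, the master data lives in the EPCIS header inside an EPCISMasterData extension; a skeleton might look like this (the vocabulary type shown is the standard EPCIS one, while the official OriginTrail samples may use urn:ot:* vocabulary types, so check against the provided files):

    <extension>
        <EPCISMasterData>
            <VocabularyList>
                <Vocabulary type="urn:epcglobal:epcis:vtype:BusinessLocation">
                    <VocabularyElementList>
                        <VocabularyElement id="urn:epc:id:sgln:Building_Green">
                            <!-- urn:ot:* attributes for this object, see the Namespace section -->
                        </VocabularyElement>
                    </VocabularyElementList>
                </Vocabulary>
            </VocabularyList>
        </EPCISMasterData>
    </extension>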

More information about the master data attributes is available in the Namespace section.

Visibility event data

The main focus of the EPCIS standard is formalizing the description of event data generated by activities within the supply chain. OriginTrail focuses on ObjectEvent, AggregationEvent and TransformationEvent, which are thoroughly described in the standard (although other event types are also supported). We strongly advise reading the GS1 EPCIS implementation guideline and evaluating our example files.

Event data describes interactions between the entities described with master data by the data creator. OriginTrail distinguishes two types of event data:

  1. Internal events are related to processes of object movement or transformation (production, repackaging etc.) within the scope of one supply chain participant's business location (read point) as part of some business process. For example, this could be production or assembly transactions that result in product output for further production or sale (repackaging, labeling etc.). The ownership of the objects does not change during the event, so a consensus check is not necessary.
  2. External events are related to processes between different supply chain participants (sales/purchases, transport). They represent processes where the jurisdiction or ownership of the objects changes in the supply chain. This type of event should use the consensus check.

Each event should have a unique ID that connects the GS1 event with the corresponding ERP transaction in the data creator's database. The event data implies that its provider is one of the active participants in the transaction process.
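
To make this concrete, a shipping ObjectEvent for an external event could be sketched as follows (EPC, location, time and business transaction values are placeholders; compare the provided sample files for the exact layout):

    <ObjectEvent>
        <eventTime>2018-01-01T10:00:00.000Z</eventTime>
        <eventTimeZoneOffset>+00:00</eventTimeZoneOffset>
        <epcList>
            <epc>urn:epc:id:sgtin:Batch_1</epc>
        </epcList>
        <action>OBSERVE</action>
        <bizStep>urn:epcglobal:cbv:bizstep:shipping</bizStep>
        <readPoint>
            <id>urn:epc:id:sgln:Readpoint_Green</id>
        </readPoint>
        <bizLocation>
            <id>urn:epc:id:sgln:Building_Green</id>
        </bizLocation>
        <bizTransactionList>
            <bizTransaction type="urn:epcglobal:cbv:btt:inv">Document_1</bizTransaction>
        </bizTransactionList>
    </ObjectEvent>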

OriginTrail Extension section

The EPCIS standard allows extensions of its data set. Please read the Namespace section for more details. Currently, the OriginTrail protocol requires the following extensions:

  • OTEventClass and OTEventType - these correspond to the event classes and event types described in the data layer model.
  • documentID - the value represents the key for the consensus check between participants. One event can have several documents in its Business Transaction List, but only the documentID value will be used to link two events that are described by different entities. The documentID mapping must be predetermined so that supply chain participants know how to trigger the consensus check.
  • Source and Destination - these GS1 tags are part of the extension section and are utilized by OriginTrail to determine which parties are involved in the consensus check. Also, if the event class is Ownership, the ownership of the products will be transferred from the source to the destination, as shown in the sketch below.
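
A sketch of the extension part of such an event (party identifiers and the documentID value are placeholders, and the exact nesting and casing of the OriginTrail tags should be verified against the provided sample files and XSD):

    <extension>
        <sourceList>
            <source type="urn:epcglobal:cbv:sdt:possessing_party">urn:epc:id:sgln:Building_Green</source>
        </sourceList>
        <destinationList>
            <destination type="urn:epcglobal:cbv:sdt:possessing_party">urn:epc:id:sgln:Building_Pink</destination>
        </destinationList>
        <extension>
            <OTEventClass>urn:ot:event:ownership</OTEventClass>
            <OTEventType>Sale</OTEventType>
            <documentId>Document_1</documentId>
        </extension>
    </extension>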

Providing XML structured data to OriginTrail Decentralized Network

To start integration with OriginTrail, a periodic upload of an appropriately structured XML file (according to the XSD schema above) should be set up. Please check https://github.com/OriginTrail/ot-node/wiki/Installation-Instructions for further details.

XML EPCIS Examples

The provided examples describe the proposed data structure and data flow. The main goal is to elaborate the data structuring process and the features of the ODN. We have set out a simple Manufacturer-Distributor-Retail (MDR) supply chain where goods move only forward.

Supply chain consists of 4 entities:

  • Green - Manufacturer of wine
  • Pink - Distributor of beverages
  • Orange and Red - Retail shops

For clarity and ease of analysis, the examples deal with generic items called Product1 and generic locations (with generic read points). Real-life use cases should utilize GS1 identifiers for the values (GLN, GTIN, etc.). For example, instead of the value urn:epc:id:sgln:Building_Green there should be a GLN number such as urn:epc:id:sgln:0614141.12345.0.

1. Basic sales example

Supply chain participants map:

[Image: _images/Basic_sale.jpg]

Use case: Green is producing wine and selling it to Pink. Shipping and receiving events are generating data that is being processed on ODN.

GS1 EPCIS design:

[Image: _images/Design.JPG]

Sample files

2. Complex manufacturer-distributor-retail (MDR) sale

Supply chain participants map:

[Image: _images/MDR.jpg]

Use case: Green is producing wine and selling it to Pink. Pink is distributing (selling) wine to retail shop (Orange). Batches on Pink are sold partially. Shipping and receiving events are generating data that is being processed on ODN.

GS1 EPCIS design:

[Image: _images/DesignMDR.JPG]

Sample files

3. MDR with zero knowledge proof

Supply chain participants map:

[Image: _images/MDR.jpg]

Use case: Green is producing wine and selling it to Pink. Pink is distributing (selling) wine to retail shop (Orange). Batches on Pink are sold partially. Zero knowledge proof for mass balance must be utilized. Shipping and receiving events are generating data that is being processed on ODN.

Note: Unlike the previous scenario, this scenario utilizes zero knowledge proof. There are additional steps and constraints when this feature is utilized. The purpose of this scenario is to point out the differences in the data structure. There are minor differences in the quantity being sold from Pink (some quantity is left unsold at the Pink location).

GS1 EPCIS design:

[Image: _images/DesignMDRZk.JPG]

Sample files

4. MDR with aggregation events

Supply chain participants map:

[Image: _images/MDRagg.jpg]

Use case: Green is producing wine (one product with several batches). Products are packed on pallets; one pallet can contain several batches. Green is selling the products to Pink. Pink is distributing (selling) the wine to a retail shop (Orange). The wine is sold in pallets that are not changed at the Pink location - Pink handles pallets as atomic products (nothing is added to or removed from a pallet). Pink sells the wine pallets to Orange. Orange unpacks the pallets when they receive them; they unpack only Batch2 from the pallet. Pallets can be partially or completely unpacked. Shipping, receiving, packing and unpacking events are generating data that is being processed on the ODN.

Note: An unpacking event must explicitly state what is being unpacked in order to connect the vertices in the graph database. Please do not use the unpack-all <childEPCs /> tag. Also, pay attention to the ADD and DELETE actions that signal the type of the observed event, as in the sketch below.
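
For instance, an unpacking aggregation event that removes only Batch2 from a pallet would explicitly list that child EPC and use the DELETE action, roughly as follows (identifiers are placeholders and the bizStep value assumes the CBV unpacking step):

    <AggregationEvent>
        <eventTime>2018-01-05T12:00:00.000Z</eventTime>
        <eventTimeZoneOffset>+00:00</eventTimeZoneOffset>
        <parentID>urn:epc:id:sscc:Pallet_1</parentID>
        <childEPCs>
            <epc>urn:epc:id:sgtin:Batch_2</epc>
        </childEPCs>
        <action>DELETE</action>
        <bizStep>urn:epcglobal:cbv:bizstep:unpacking</bizStep>
        <bizLocation>
            <id>urn:epc:id:sgln:Building_Orange</id>
        </bizLocation>
    </AggregationEvent>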

GS1 EPCIS design:

[Image: _images/DesignMDRAgg.JPG]

Sample files

Namespace

The EPCIS standard places general constraints on the identifiers that End Users may create for use as User Vocabulary elements. Specifically, an identifier must conform to URI syntax, and must either conform to syntax specified in GS1 standards or must belong to a subspace of URI identifiers that is under the control of the end user who assigns them.

OriginTrail namespaces are mostly used for attribute definitions in EPCIS-based XML files. Where possible, the values for those attributes should be based on GS1 identifiers.

For more information about data structuring in OriginTrail, check the Data Structure Guidelines.

GS1 CBV Namespace

The Core Business Vocabulary (CBV) provides additional constraints on the syntax of identifiers for user vocabularies, so that CBV-compliant documents use identifiers with a predictable structure. This in turn makes it easier for trading partners to understand the meaning of such identifiers. Whenever possible, the GS1 namespace should be used in preference to the OriginTrail namespace.

OriginTrail Namespace

OriginTrail created its own namespace as an extension of the GS1 namespace. The root of the namespace is urn:ot:. We strongly advise using our namespace as described below. The OriginTrail protocol namespace is based on the Data Layer model. If the OriginTrail namespace is not used properly, it can cause deviations in the data structure, so we strongly recommend validating the data in the graph database after the first iterations.

Object urn:ot:object

Objects represent all physical or digital entities involved in events in the supply chain. Examples of objects are vehicles, production facilities, documents, sensors, personnel etc. ObjectClasses specifically define a global set of properties for their child Objects (as their “instances”). In the example of a wine authenticity use case, the data shared among supply chain entities (winery, distributors, retailers etc) involves information about specific batches of bottles with unique characteristics. The master data about a product would present an ObjectClass node in the OT graph, while the specifics about the product batch would be contained within the “batch” Object. This allows for a hierarchical organization of objects, with a simplistic but robust class-like inheritance.

Actor urn:ot:object:actor

Actors, which encompass companies, people, machines and any other entity that can act on or observe objects within the supply chain (the “Who”).

urn:ot:object:actor:id

The ID represents the unique identifier of the entity within its supply chain.

urn:ot:object:actor:name

The name is a short string description of the entity. It can represent a full or partial company name or the full name of a person.

urn:ot:object:actor:description

The description is a longer textual description of the entity that provides broader context.

urn:ot:object:actor:category

The category represents the type of the entity. Please try to choose a value from the list below:

  • Person
  • Company
  • Other

urn:ot:object:actor:wallet

The wallet represents the Ethereum wallet of the entity that is used for its DC node.
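
Put together, an actor could be described in master data roughly as follows (the vocabulary element id, the wallet address and all values are illustrative placeholders):

    <VocabularyElement id="urn:epc:id:pgln:Green">
        <attribute id="urn:ot:object:actor:id">Green</attribute>
        <attribute id="urn:ot:object:actor:name">Green</attribute>
        <attribute id="urn:ot:object:actor:description">Green winery, manufacturer of wine</attribute>
        <attribute id="urn:ot:object:actor:category">Company</attribute>
        <attribute id="urn:ot:object:actor:wallet">0x0000000000000000000000000000000000000000</attribute>
    </VocabularyElement>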

Product urn:ot:object:product

Products (supply chain objects of interest), which represent goods or services that are acted upon by actors (the “What”). A product is the metadata entity for a batch (a batch is a physical instance of a product).

urn:ot:object:product:id

The ID represents the unique identifier of the entity within its supply chain.

urn:ot:object:product:description

The description is a longer textual description of the entity that provides broader context.

urn:ot:object:product:category

The category represents the type of the entity. A list of product classifications can be found in the GS1 GPC standards. If the product is wine, the value for the category can be Beverage, which is part of the wider Food/Beverage/Tobacco segment in the GPC list.

Batch urn:ot:object:product:batch

A batch represents a physical instance of a product - the minimal subset of products that share the same characteristics. There are two main types of batches that GS1 handles: lots and serial numbers. More information about GTIN can be found here.

urn:ot:object:product:batch:id

The ID represents the unique identifier of the entity within its supply chain.

urn:ot:object:product:batch:productId

productId represents the explicit relation between the batch and the product. The value of this attribute should contain the value of the urn:ot:object:product:id attribute of the corresponding product entity.

urn:ot:object:product:batch:productionDate

Date when batch was produced.

urn:ot:object:product:batch:expirationDate

Date after which a product (such as food or medicine) should not be sold because of an expected decline in quality or effectiveness.
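
As an illustration, a product and one of its batches could be described with these attributes roughly as follows (vocabulary element ids, dates and values are placeholders):

    <VocabularyElement id="urn:epc:idpat:sgtin:Product1">
        <attribute id="urn:ot:object:product:id">Product1</attribute>
        <attribute id="urn:ot:object:product:description">Red wine, 0.75 l bottle</attribute>
        <attribute id="urn:ot:object:product:category">Beverage</attribute>
    </VocabularyElement>

    <VocabularyElement id="urn:epc:id:sgtin:Batch_1">
        <attribute id="urn:ot:object:product:batch:id">Batch_1</attribute>
        <attribute id="urn:ot:object:product:batch:productId">Product1</attribute>
        <attribute id="urn:ot:object:product:batch:productionDate">2018-03-01</attribute>
        <attribute id="urn:ot:object:product:batch:expirationDate">2020-03-01</attribute>
    </VocabularyElement>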

Location urn:ot:object:location

Locations, which define either physical or digital locations of products or actors (the “Where”).

urn:ot:object:location:id

The ID represents the unique identifier of the entity within its supply chain.

urn:ot:object:location:category

The category represents the type of the entity. Please try to choose a value from the list below:

  • Building
  • Readpoint
  • Vehicle
  • Other

urn:ot:object:location:description

The description is a longer textual description of the entity that provides broader context.

urn:ot:object:location:actorId

actorId represents the explicit relation between the location and the actor. The value of this attribute should contain the value of the urn:ot:object:actor:id attribute of the corresponding actor entity, proclaiming which actor is the owner of the location.
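
An illustrative location entry using these attributes (the identifier is a placeholder; a real deployment would use a GLN-based SGLN value as noted in the examples section):

    <VocabularyElement id="urn:epc:id:sgln:Building_Green">
        <attribute id="urn:ot:object:location:id">Building_Green</attribute>
        <attribute id="urn:ot:object:location:category">Building</attribute>
        <attribute id="urn:ot:object:location:description">Green production and storage facility</attribute>
        <attribute id="urn:ot:object:location:actorId">Green</attribute>
    </VocabularyElement>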

Event urn:ot:event

Transport urn:ot:event:transport

Transport events, which explain the physical or digital relocation of objects in the supply chain.

Transformation urn:ot:event:transformation

Transformation events, which contain information about the transformation of one or more objects into (a new) one. An example would be the case of an electronic device (e.g. a mobile phone), where the assembly is observed as a transformation event combining different components (Objects) into one output Object, or the case of combining a set of SKUs into one group entity such as a transportation pallet. Similarly, a digital transformation event would be any type of processing of a digital product (e.g. the mastering of a digital sound recording). This event type corresponds to GS1 AggregationEvents and TransformationEvents.

Observation urn:ot:event:observation

Observation events, which entail any type of observational activity such as temperature tracking via sensors or laboratory tests. This event corresponds to GS1 ObjectEvents that are published by one party (interaction between different business entities is not the primary focus of the event).

Ownership urn:ot:event:ownership

Ownership/custody transfer events, where the change of ownership or custody of Objects is distinctly described. An example would be a sale event. The consensus check is triggered only on Ownership events, using the documentID key value between the Source and Destination owners.

Extension

The GS1 EPCIS standard allows custom extensions in the event section. OriginTrail uses the following tags:

  • OTEventClass - the values can be any of the urn:ot:event namespace members. 1:N tags are allowed.
  • OTEventType - the value is a string that describes the process. 1:N tags are allowed.
  • documentID - the value represents the key for the consensus check between participants. One event can have several documents in its Business Transaction List, but only the documentID value will be used to link two events that are described by different entities.

Contribution Guidelines

We’d love for you to contribute to our source code and to make OT protocol better than it is today! Here are the guidelines we’d like you to follow.

If you’re new to OT node development, there are guides in this wiki for getting your dev environment set up. Get to know the commit process with something small like a bug fix. If you’re not sure where to start, post a message on the RocketChat #development channel.

Once you’ve got your feet under you, you can start working on larger projects. For anything more than a bug fix, it probably makes sense to coordinate through RocketChat, since it’s possible someone else is working on the same thing.

Please make descriptive commit messages.

The following checklist is worked through for every commit:

  • Check out and try the changeset.
  • Ensure that the code follows the language coding conventions.
  • Ensure that the code is well designed and architected.

Pull Requests

If you report an issue, we’d love to see a pull request attached. Please keep in mind that your commit may end up getting modified. Sometimes we’ll make the change ourselves, but often we’ll just let you know what needs to happen and help you fix it up yourself.

Contributor Code of Conduct

As contributors and maintainers of the OT Node project, we pledge to respect everyone who contributes by posting issues, updating documentation, submitting pull requests, providing feedback in comments, and any other activities.

Communication through any of our channels (GitHub, RocketChat, Twitter, etc.) must be constructive and never resort to personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

We promise to extend courtesy and respect to everyone involved in this project regardless of gender, gender identity, sexual orientation, disability, age, race, ethnicity, religion, or level of experience. We expect anyone contributing to the project to do the same.

If any member of the community violates this code of conduct, the maintainers of the OT Node project may take action, removing issues, comments, and PRs or blocking accounts as deemed appropriate.

If you are subject to or witness unacceptable behavior, or have any other concerns, please email us.

Questions, Bugs, Features

Got a Question or Problem?

Do not open issues for general support questions as we want to keep GitHub issues for bug reports and feature requests. You’ve got much better chances of getting your question answered on RocketChat.

Found an Issue or Bug?

If you find a bug in the source code, you can help us by submitting an issue to our GitHub repository. Even better, you can submit a Pull Request with a fix.

Missing a Feature?

You can request a new feature by submitting an issue to our GitHub repository.

If you would like to implement a new feature then consider what kind of change it is:

  • Major Changes that you wish to contribute to the project should be discussed first in a GitHub issue that clearly outlines the changes and benefits of the feature.
  • Small Changes can be crafted and submitted directly to the GitHub repository as a Pull Request. See the section about Pull Request Submission Guidelines and, for detailed information, the core development documentation.

Want a Doc Fix?

Should you have a suggestion for the documentation, you can open an issue and outline the problem or improvement you have - however, creating the doc fix yourself is much better!

If you want to help improve the docs, it’s a good idea to let others know what you’re working on to minimize duplication of effort. Create a new issue (or comment on a related existing one) to let others know what you’re working on.

If you’re making a small change (typo, phrasing) don’t worry about filing an issue first. Use the friendly blue “Improve this doc” button at the top right of the doc page to fork the repository in-place and make a quick change on the fly. The commit message is preformatted to the right type and scope, so you only have to add the description.

For large fixes, please build and test the documentation before submitting the PR to be sure you haven’t accidentally introduced any layout or formatting issues. You should also make sure that your commit message follows the Commit Message Guidelines.
