Welcome to Hyperledger Fabric

Hyperledger Fabric is a platform for distributed ledger solutions, underpinned by a modular architecture that delivers high degrees of confidentiality, resiliency, flexibility and scalability. It is designed to support pluggable implementations of different components and to accommodate the complexity and intricacies that exist across the economic ecosystem.

Hyperledger Fabric delivers a uniquely elastic and extensible architecture, distinguishing it from alternative blockchain solutions. Planning for the future of enterprise blockchain requires building on top of a fully vetted, open-source architecture; Hyperledger Fabric is your starting point.

We recommend that first-time users begin by reading the :doc:`getting_started` section to become familiar with the components of Hyperledger Fabric and the basic transaction flow. Once comfortable, continue exploring the library for demos, technical specifications, APIs, and more.

Note

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Before you dive in, watch how Hyperledger Fabric is building a blockchain for business.



Getting Started

Prerequisites

Install cURL

If you haven't installed the cURL tool, or if you get errors running the curl commands from the documentation, please download the latest version of the `cURL <https://curl.haxx.se/download.html>`__ tool.

Note

If you're running on Windows, please see the specific note on Windows extras below.

Docker and Docker Compose

You will need the following installed on the platform on which you will be running, developing on (or for), or otherwise using Hyperledger Fabric:

  • MacOSX, *nix, or Windows 10: Docker version 17.06.2-ce or greater is required.
  • Older versions of Windows: Docker Toolbox - again, Docker version 17.06.2-ce or greater.

You can check the version of Docker you have installed with the following command from a terminal prompt:

docker --version

Note

Installing Docker for Mac or Windows, or Docker Toolbox, will also install Docker Compose. If you already had Docker installed, you should check that you have Docker Compose version 1.14.0 or greater installed. If not, we recommend that you install a more recent version of Docker.

You can check the version of Docker Compose you have installed with the following command from a terminal prompt:

docker-compose --version

Go Programming Language

Hyperledger Fabric uses the Go programming language, version 1.9.x, in many of its components.

Note

Go version 1.8.x is not supported.

  • Go - version 1.9.x

Since we will be writing chaincode programs in Go, we need to make sure the source code is located somewhere within the ``$GOPATH`` tree. First, check that you have already set the ``$GOPATH`` environment variable.

echo $GOPATH
/Users/xxx/go

If nothing is displayed when you echo ``$GOPATH``, you will need to set this environment variable. Typically, this value will be a directory tree child of your development workspace, if you have one, or a subdirectory of your $HOME directory. Since we will be doing a bunch of coding in Go, you may want to add the following to your ``~/.bashrc``:

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
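As a quick sanity check that the Go toolchain and your ``$GOPATH`` are set up correctly, you can compile and run a trivial program placed inside the source tree (the hello path here is just an illustration):

// $GOPATH/src/hello/hello.go
package main

import "fmt"

func main() {
	fmt.Println("Go toolchain and GOPATH are working")
}

Running go run $GOPATH/src/hello/hello.go should print the message above.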

Node.js Runtime and NPM

If you plan to develop applications for Hyperledger Fabric using the Hyperledger Fabric SDK for Node.js, you will need to install version 8.9.x of Node.js.

Note

Node.js version 9.x is not supported at this time.

Note

Installing Node.js will also install NPM; however, it is recommended that you confirm the version of NPM installed. You can upgrade the npm tool with the following command:

npm install npm@5.6.0 -g

Python

Note

The following applies to Ubuntu 16.04 users only.

By default, Ubuntu 16.04 comes with Python 3.5.1 installed as the python3 binary. However, the Fabric Node.js SDK requires an iteration of Python 2.7 in order for ``npm install`` operations to complete successfully. Retrieve the 2.7 version with the following command:

sudo apt-get install python

Check your version(s):

python --version

Windows extras

If you are developing on Windows 7, you will want to work within the Docker Quickstart Terminal, which uses Git Bash and provides a better alternative to the built-in Windows shell.

However, experience has shown that this development environment is rather limited in functionality. It is suitable for running Docker-based scenarios, such as :doc:`getting_started`, but you may have difficulty with operations involving the ``make`` and ``docker`` commands.

On Windows 10 you should use the native Docker distribution, and you may use the Windows PowerShell. However, for the Download Platform-specific Binaries command to succeed, you will still need to have the uname command available. You can get it as part of Git, but beware that only the 64-bit version is supported.

Before running any ``git clone`` commands, run the following commands:

git config --global core.autocrlf false
git config --global core.longpaths true

You can check the setting of these parameters with the following commands:

git config --get core.autocrlf
git config --get core.longpaths

These need to be set to false and true respectively.

The ``curl`` command that comes with Git and Docker Toolbox is old and does not properly handle the redirect used in Getting Started. Make sure you install and use a newer version from the `cURL download page <https://curl.haxx.se/download.html>`__.

For Node.js you will also need the necessary Visual Studio C++ Build Tools, which are freely available and can be installed with the following command:

npm install --global windows-build-tools

See the NPM windows-build-tools page for more details.

Once this is done, you should also install the NPM GRPC module with the following command:

npm install --global grpc

At this point your environment should be ready to run the samples and tutorials in the Getting Started guide.

Note

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Hyperledger Fabric Samples

Note

If you are running on Windows, you will want to use the Docker Quickstart Terminal for the upcoming terminal commands. Please visit the Prerequisites page if you haven't previously installed it.

If you are using Docker Toolbox on Windows 7 or macOS, you will need to install and run the samples under a location in C:\Users (Windows 7) or /Users (macOS).

If you are using Docker for Mac, the samples need to be located under /Users, /Volumes, /private, or /tmp. To use a different location, please consult the Docker documentation on file sharing.

If you are using Docker for Windows, please consult the Docker documentation on shared drives and use a location under one of the shared drives.

Choose a location on your machine where you want to place the Hyperledger Fabric samples applications repository and open it in a terminal window. Then, execute the following commands:

git clone -b master https://github.com/hyperledger/fabric-samples.git
cd fabric-samples
git checkout {TAG}

Note

To ensure the samples are compatible with the version of the Fabric binaries you will download below, check out the samples {TAG} that matches your Fabric version; for example, v1.1.0. To see a list of all fabric-samples tags, use the command "git tag".

Download Platform-specific Binaries

Next, we will install the Hyperledger Fabric platform-specific binaries. This process was designed to complement the Hyperledger Fabric Samples above, but can be used independently. If you are not installing the samples above, simply create and enter a directory into which to extract the contents of the platform-specific binaries.

Please execute the following command from within the directory into which you will extract the platform-specific binaries:

curl -sSL https://goo.gl/6wtTN5 | bash -s 1.1.0

Note

If you get an error running the above curl command, you may have too old a version of curl that does not handle redirects, or an unsupported environment.

Please visit the Prerequisites page for additional information on where to find the latest version of curl and get the right environment. Alternately, you can substitute the un-shortened URL: https://github.com/hyperledger/fabric/blob/master/scripts/bootstrap.sh

Note

You can use the above command for any published version of Hyperledger Fabric: simply replace '1.1.0' with the identifier of the version you wish to install.

The command above downloads and executes a bash script that downloads and extracts all of the platform-specific binaries you will need to set up your network, and places them into the cloned repo you created above. It retrieves the following platform-specific binaries:

  • cryptogen
  • configtxgen
  • configtxlator
  • peer
  • orderer
  • fabric-ca-client

They will be placed in the bin sub-directory of your current working directory.

You may want to add that directory to your PATH environment variable so that these binaries can be picked up without fully qualifying the path to each of them, e.g.:

export PATH=<path to download location>/bin:$PATH

Finally, the script will download the Hyperledger Fabric Docker images from Docker Hub into your local Docker registry and tag them as 'latest'.

The script lists out the installed Docker images when it concludes.

Look at the names for each image; these are the components that will ultimately comprise our Hyperledger Fabric network. You will also notice that you have two instances of the same image ID - one tagged "x86_64-1.x.x" and one tagged "latest".

Note

On different architectures, x86_64 would be replaced with the string identifying your architecture.

Note

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Install Prerequisites

Before we begin, please prepare the platform(s) on which you will be developing blockchain applications and/or operating Hyperledger Fabric according to the requirements in :doc:`prereqs`.

Install Binaries and Docker Images

While we work on developing real installers for the Hyperledger Fabric binaries, we provide a script that will Download Platform-specific Binaries for your system. The script also downloads the Docker images to your local registry.

Hyperledger Fabric Samples

We offer a set of sample applications used in the tutorials. You may wish to install these :doc:`samples` before starting the tutorials.

API Documentation

The API documentation for Hyperledger Fabric's Golang APIs can be found on the godoc site for Fabric.

If you plan on doing any development using these APIs, you may want to bookmark those links now.

Hyperledger Fabric SDKs

Hyperledger Fabric intends to offer SDKs for a wide variety of programming languages. The first two delivered are the Node.js and Java SDKs, and we hope to provide Python and Go SDKs soon after the 1.0.0 release.

Hyperledger Fabric CA

Hyperledger Fabric provides an optional `certificate authority service <http://hyperledger-fabric-ca.readthedocs.io/en/latest>`_ that you may choose to use to generate the certificates and key material to configure and manage identity in your blockchain network. However, any CA that can generate ECDSA certificates may be used.

Key Concepts

Introduction

Hyperledger Fabric is a platform for distributed ledger solutions underpinned by a modular architecture delivering high degrees of confidentiality, resiliency, flexibility and scalability. It is designed to support pluggable implementations of different components and accommodate the complexity and intricacies that exist across the economic ecosystem.

Hyperledger Fabric delivers a uniquely elastic and extensible architecture, distinguishing it from alternative blockchain solutions. Planning for the future of enterprise blockchain requires building on top of a fully vetted, open-source architecture; Hyperledger Fabric is your starting point.

We recommend first-time users begin by going through the rest of the introduction below in order to gain familiarity with how blockchains work and with the specific features and components of Hyperledger Fabric.

Once comfortable – or if you’re already familiar with blockchain and Hyperledger Fabric – go to Getting Started and from there explore the demos, technical specifications, APIs, etc.

What is a Blockchain?

A Distributed Ledger

At the heart of a blockchain network is a distributed ledger that records all the transactions that take place on the network.

A blockchain ledger is often described as decentralized because it is replicated across many network participants, each of whom collaborate in its maintenance. We’ll see that decentralization and collaboration are powerful attributes that mirror the way businesses exchange goods and services in the real world.

In addition to being decentralized and collaborative, the information recorded to a blockchain is append-only, using cryptographic techniques that guarantee that once a transaction has been added to the ledger it cannot be modified. This property of immutability makes it simple to determine the provenance of information because participants can be sure information has not been changed after the fact. It’s why blockchains are sometimes described as systems of proof.

Smart Contracts

To support the consistent update of information – and to enable a whole host of ledger functions (transacting, querying, etc) – a blockchain network uses smart contracts to provide controlled access to the ledger.

Smart contracts are not only a key mechanism for encapsulating information and keeping it simple across the network, they can also be written to allow participants to execute certain aspects of transactions automatically.

A smart contract can, for example, be written to stipulate the cost of shipping an item that changes depending on when it arrives. With the terms agreed to by both parties and written to the ledger, the appropriate funds change hands automatically when the item is received.
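As a rough sketch of that idea (this is not Fabric's API; the shippingCost function and the 20% discount are invented for the example), the agreed rule could be captured in ordinary Go code:

package main

import (
	"fmt"
	"time"
)

// shippingCost encodes a hypothetical agreed term: the price drops 20%
// if the item arrives after the promised date.
func shippingCost(base float64, promised, arrived time.Time) float64 {
	if arrived.After(promised) {
		return base * 0.8 // late delivery triggers the agreed discount
	}
	return base
}

func main() {
	promised := time.Date(2018, time.April, 1, 0, 0, 0, 0, time.UTC)
	arrived := promised.AddDate(0, 0, 3) // delivered three days late
	fmt.Println(shippingCost(100.0, promised, arrived)) // prints 80
}

Once such terms are agreed and written to the ledger, the appropriate funds can change hands automatically when the delivery transaction is recorded.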

Consensus

The process of keeping the ledger transactions synchronized across the network – to ensure that ledgers only update when transactions are approved by the appropriate participants, and that when ledgers do update, they update with the same transactions in the same order – is called consensus.

We’ll learn a lot more about ledgers, smart contracts and consensus later. For now, it’s enough to think of a blockchain as a shared, replicated transaction system which is updated via smart contracts and kept consistently synchronized through a collaborative process called consensus.

Why is a Blockchain useful?

Today’s Systems of Record

The transactional networks of today are little more than slightly updated versions of networks that have existed since business records have been kept. The members of a Business Network transact with each other, but they maintain separate records of their transactions. And the things they’re transacting – whether it’s Flemish tapestries in the 16th century or the securities of today – must have their provenance established each time they’re sold to ensure that the business selling an item possesses a chain of title verifying their ownership of it.

What you’re left with is a business network that looks like this:

Modern technology has taken this process from stone tablets and paper folders to hard drives and cloud platforms, but the underlying structure is the same. Unified systems for managing the identity of network participants do not exist, establishing provenance is so laborious it takes days to clear securities transactions (the world volume of which is numbered in the many trillions of dollars), contracts must be signed and executed manually, and every database in the system contains unique information and therefore represents a single point of failure.

It’s impossible with today’s fractured approach to information and process sharing to build a system of record that spans a business network, even though the needs of visibility and trust are clear.

The Blockchain Difference

What if instead of the rat’s nest of inefficiencies represented by the “modern” system of transactions, business networks had standard methods for establishing identity on the network, executing transactions, and storing data? What if establishing the provenance of an asset could be determined by looking through a list of transactions that, once written, cannot be changed, and can therefore be trusted?

That business network would look more like this:

This is a blockchain network. Every participant in it has their own replicated copy of the ledger. In addition to ledger information being shared, the processes which update the ledger are also shared. Unlike today’s systems, where a participant’s private programs are used to update their private ledgers, a blockchain system has shared programs to update shared ledgers.

With the ability to coordinate their business network through a shared ledger, blockchain networks can reduce the time, cost, and risk associated with private information and processing while improving trust and visibility.

You now know what blockchain is and why it’s useful. There are a lot of other details that are important, but they all relate to these fundamental ideas of the sharing of information and processes.

What is Hyperledger Fabric?

The Linux Foundation founded Hyperledger in 2015 to advance cross-industry blockchain technologies. Rather than declaring a single blockchain standard, it encourages a collaborative approach to developing blockchain technologies via a community process, with intellectual property rights that encourage open development and the adoption of key standards over time.

Hyperledger Fabric is one of the blockchain projects within Hyperledger. Like other blockchain technologies, it has a ledger, uses smart contracts, and is a system by which participants manage their transactions.

Where Hyperledger Fabric breaks from some other blockchain systems is that it is private and permissioned. Rather than an open permissionless system that allows unknown identities to participate in the network (requiring protocols like Proof of Work to validate transactions and secure the network), the members of a Hyperledger Fabric network enroll through a Membership Service Provider (MSP).

Hyperledger Fabric also offers several pluggable options. Ledger data can be stored in multiple formats, consensus mechanisms can be switched in and out, and different MSPs are supported.

Hyperledger Fabric also offers the ability to create channels, allowing a group of participants to create a separate ledger of transactions. This is an especially important option for networks where some participants might be competitors and not want every transaction they make - a special price they’re offering to some participants and not others, for example - known to every participant. If two participants form a channel, then those participants – and no others – have copies of the ledger for that channel.

Shared Ledger

Hyperledger Fabric has a ledger subsystem comprising two components: the world state and the transaction log. Each participant has a copy of the ledger for every Hyperledger Fabric network they belong to.

The world state component describes the state of the ledger at a given point in time. It’s the database of the ledger. The transaction log component records all transactions which have resulted in the current value of the world state. It’s the update history for the world state. The ledger, then, is a combination of the world state database and the transaction log history.

The ledger has a replaceable data store for the world state. By default, this is a LevelDB key-value store database. The transaction log does not need to be pluggable. It simply records the before and after values of the ledger database being used by the blockchain network.
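A toy model in Go (not Fabric's actual implementation) makes the relationship concrete: the world state is a key-value database, and the transaction log is the append-only history of before-and-after values that produced it:

package main

import "fmt"

// Write records one key's before/after values, mirroring how the
// transaction log captures each update applied to the world state.
type Write struct {
	Key, Before, After string
}

type Ledger struct {
	WorldState map[string]string // current values (the "database")
	TxLog      []Write           // append-only update history
}

func (l *Ledger) Put(key, value string) {
	l.TxLog = append(l.TxLog, Write{key, l.WorldState[key], value})
	l.WorldState[key] = value
}

func main() {
	l := &Ledger{WorldState: map[string]string{}}
	l.Put("asset1", "owned by A")
	l.Put("asset1", "owned by B")
	fmt.Println(l.WorldState["asset1"], len(l.TxLog)) // owned by B 2
}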

Smart Contracts

Hyperledger Fabric smart contracts are written in chaincode and are invoked by an application external to the blockchain when that application needs to interact with the ledger. In most cases chaincode only interacts with the database component of the ledger, the world state (querying it, for example), and not the transaction log.

Chaincode can be implemented in several programming languages. The currently supported chaincode language is Go with support for Java and other languages coming in future releases.
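For orientation, a minimal Go chaincode for the Fabric 1.x release line looks roughly like the sketch below. The shim import paths match Fabric 1.x; the get function is our own illustrative example of reading the world state:

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// SimpleChaincode carries no state; the shim calls its methods.
type SimpleChaincode struct{}

// Init runs when the chaincode is instantiated on a channel.
func (t *SimpleChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

// Invoke handles every transaction proposal sent to this chaincode.
func (t *SimpleChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	if fn == "get" && len(args) == 1 {
		value, err := stub.GetState(args[0]) // reads the world state only
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(value)
	}
	return shim.Error("unknown function")
}

func main() {
	if err := shim.Start(new(SimpleChaincode)); err != nil {
		panic(err)
	}
}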

Privacy

Depending on the needs of a network, participants in a Business-to-Business (B2B) network might be extremely sensitive about how much information they share. For other networks, privacy will not be a top concern.

Hyperledger Fabric supports networks where privacy (using channels) is a key operational requirement as well as networks that are comparatively open.

Consensus

Transactions must be written to the ledger in the order in which they occur, even though they might be between different sets of participants within the network. For this to happen, the order of transactions must be established and a method for rejecting bad transactions that have been inserted into the ledger in error (or maliciously) must be put into place.

This is a thoroughly researched area of computer science, and there are many ways to achieve it, each with different trade-offs. For example, PBFT (Practical Byzantine Fault Tolerance) can provide a mechanism for file replicas to communicate with each other to keep each copy consistent, even in the event of corruption. Alternatively, in Bitcoin, ordering happens through a process called mining where competing computers race to solve a cryptographic puzzle which defines the order that all processes subsequently build upon.

Hyperledger Fabric has been designed to allow network starters to choose a consensus mechanism that best represents the relationships that exist between participants. As with privacy, there is a spectrum of needs; from networks that are highly structured in their relationships to those that are more peer-to-peer.

We’ll learn more about the Hyperledger Fabric consensus mechanisms, which currently include SOLO, Kafka, and will soon extend to SBFT (Simplified Byzantine Fault Tolerance), in another document.

Where can I learn more?

Getting Started

We provide a number of tutorials where you’ll be introduced to most of the key components within a blockchain network, learn more about how they interact with each other, and then you’ll actually get the code and run some simple transactions against a running blockchain network. We also provide tutorials for those of you thinking of operating a blockchain network using Hyperledger Fabric.

Hyperledger Fabric Model

A deeper look at the components and concepts brought up in this introduction as well as a few others and describes how they work together in a sample transaction flow.

Hyperledger Fabric Functionalities

Hyperledger Fabric is an implementation of distributed ledger technology (DLT) that delivers enterprise-ready network security, scalability, confidentiality and performance, in a modular blockchain architecture. Hyperledger Fabric delivers the following blockchain network functionalities:

Identity management

To enable permissioned networks, Hyperledger Fabric provides a membership identity service that manages user IDs and authenticates all participants on the network. Access control lists can be used to provide additional layers of permission through authorization of specific network operations. For example, a specific user ID could be permitted to invoke a chaincode application, but blocked from deploying new chaincode.

Privacy and confidentiality

Hyperledger Fabric enables competing business interests, and any groups that require private, confidential transactions, to coexist on the same permissioned network. Private channels are restricted messaging paths that can be used to provide transaction privacy and confidentiality for specific subsets of network members. All data, including transaction, member and channel information, on a channel are invisible and inaccessible to any network members not explicitly granted access to that channel.

Efficient processing

Hyperledger Fabric assigns network roles by node type. To provide concurrency and parallelism to the network, transaction execution is separated from transaction ordering and commitment. Executing transactions prior to ordering them enables each peer node to process multiple transactions simultaneously. This concurrent execution increases processing efficiency on each peer and accelerates delivery of transactions to the ordering service.

In addition to enabling parallel processing, the division of labor unburdens ordering nodes from the demands of transaction execution and ledger maintenance, while peer nodes are freed from ordering (consensus) workloads. This bifurcation of roles also limits the processing required for authorization and authentication; all peer nodes do not have to trust all ordering nodes, and vice versa, so processes on one can run independently of verification by the other.

Chaincode functionality

Chaincode applications encode logic that is invoked by specific types of transactions on the channel. Chaincode that defines parameters for a change of asset ownership, for example, ensures that all transactions that transfer ownership are subject to the same rules and requirements. System chaincode is distinguished as chaincode that defines operating parameters for the entire channel. Lifecycle and configuration system chaincode defines the rules for the channel; endorsement and validation system chaincode defines the requirements for endorsing and validating transactions.

Modular design

Hyperledger Fabric implements a modular architecture to provide functional choice to network designers. Specific algorithms for identity, ordering (consensus) and encryption, for example, can be plugged in to any Hyperledger Fabric network. The result is a universal blockchain architecture that any industry or public domain can adopt, with the assurance that its networks will be interoperable across market, regulatory and geographic boundaries.

Hyperledger Fabric Model

This section outlines the key design features woven into Hyperledger Fabric that fulfill its promise of a comprehensive, yet customizable, enterprise blockchain solution:

  • Assets - Asset definitions enable the exchange of almost anything with monetary value over the network, from whole foods to antique cars to currency futures.
  • Chaincode - Chaincode execution is partitioned from transaction ordering, limiting the required levels of trust and verification across node types, and optimizing network scalability and performance.
  • Ledger Features - The immutable, shared ledger encodes the entire transaction history for each channel, and includes SQL-like query capability for efficient auditing and dispute resolution.
  • Privacy through Channels - Channels enable multi-lateral transactions with the high degrees of privacy and confidentiality required by competing businesses and regulated industries that exchange assets on a common network.
  • Security & Membership Services - Permissioned membership provides a trusted blockchain network, where participants know that all transactions can be detected and traced by authorized regulators and auditors.
  • Consensus - A unique approach to consensus enables the flexibility and scalability needed for the enterprise.

Assets

Assets can range from the tangible (real estate and hardware) to the intangible (contracts and intellectual property). Hyperledger Fabric provides the ability to modify assets using chaincode transactions.

Assets are represented in Hyperledger Fabric as a collection of key-value pairs, with state changes recorded as transactions on a channel ledger. Assets can be represented in binary and/or JSON form.

You can easily define and use assets in your Hyperledger Fabric applications using the Hyperledger Composer tool.
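As a small sketch of that representation (the Car type and the CAR1 key are invented for this example), an asset can be modeled as a Go struct and serialized to JSON before being stored under a key:

package main

import (
	"encoding/json"
	"fmt"
)

// Car is a hypothetical asset; in Fabric its JSON form would be stored
// as the value for some key (e.g. "CAR1") in the channel's world state.
type Car struct {
	Make  string `json:"make"`
	Model string `json:"model"`
	Owner string `json:"owner"`
}

func main() {
	asset, err := json.Marshal(Car{Make: "Ford", Model: "Model T", Owner: "John"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("CAR1 -> %s\n", asset) // the key-value pair as it might appear
}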

Chaincode

Chaincode is software defining an asset or assets, and the transaction instructions for modifying the asset(s). In other words, it’s the business logic. Chaincode enforces the rules for reading or altering key value pairs or other state database information. Chaincode functions execute against the ledger’s current state database and are initiated through a transaction proposal. Chaincode execution results in a set of key value writes (write set) that can be submitted to the network and applied to the ledger on all peers.
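A hypothetical transfer function makes the read set / write set split visible. This is a sketch against the Fabric 1.x shim: it would live in the same package as a chaincode like the one sketched earlier and be dispatched from Invoke:

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// transfer reads the asset's current owner (captured, with its version,
// in the transaction's read set) and writes the new owner (captured in
// the write set); both sets are submitted to the network.
func transfer(stub shim.ChaincodeStubInterface, key, newOwner string) pb.Response {
	current, err := stub.GetState(key) // contributes to the read set
	if err != nil || current == nil {
		return shim.Error("asset not found")
	}
	if err := stub.PutState(key, []byte(newOwner)); err != nil { // write set
		return shim.Error(err.Error())
	}
	return shim.Success(current) // previous owner, returned to the caller
}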

Ledger Features

The ledger is the sequenced, tamper-resistant record of all state transitions in the fabric. State transitions are a result of chaincode invocations (‘transactions’) submitted by participating parties. Each transaction results in a set of asset key-value pairs that are committed to the ledger as creates, updates, or deletes.

The ledger is comprised of a blockchain (‘chain’) to store the immutable, sequenced record in blocks, as well as a state database to maintain current fabric state. There is one ledger per channel. Each peer maintains a copy of the ledger for each channel of which they are a member.

  • Query and update ledger using key-based lookups, range queries, and composite key queries
  • Read-only queries using a rich query language (if using CouchDB as state database)
  • Read-only history queries - Query ledger history for a key, enabling data provenance scenarios
  • Transactions consist of the versions of keys/values that were read in chaincode (read set) and keys/values that were written in chaincode (write set)
  • Transactions contain signatures of every endorsing peer and are submitted to ordering service
  • Transactions are ordered into blocks and are “delivered” from an ordering service to peers on a channel
  • Peers validate transactions against endorsement policies and enforce the policies
  • Prior to appending a block, a versioning check is performed to ensure that states for assets that were read have not changed since chaincode execution time
  • There is immutability once a transaction is validated and committed
  • A channel’s ledger contains a configuration block defining policies, access control lists, and other pertinent information
  • Channels contain Membership Service Provider (MSP) instances allowing for crypto materials to be derived from different certificate authorities

See the Ledger topic for a deeper dive on the databases, storage structure, and “query-ability.”
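For example, when CouchDB is the state database, a chaincode can issue a rich query through the shim's GetQueryResult. The Mango selector syntax and the owner field below are illustrative assumptions about how the assets were stored:

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// queryByOwner returns the keys of all assets whose JSON value has a
// matching "owner" field. Rich queries require CouchDB as the state
// database; with LevelDB only key-based and range queries are available.
func queryByOwner(stub shim.ChaincodeStubInterface, owner string) pb.Response {
	query := `{"selector":{"owner":"` + owner + `"}}`
	iter, err := stub.GetQueryResult(query)
	if err != nil {
		return shim.Error(err.Error())
	}
	defer iter.Close()

	var keys []byte
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return shim.Error(err.Error())
		}
		keys = append(keys, []byte(kv.Key+"\n")...)
	}
	return shim.Success(keys)
}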

Privacy through Channels

Hyperledger Fabric employs an immutable ledger on a per-channel basis, as well as chaincodes that can manipulate and modify the current state of assets (i.e. update key value pairs). A ledger exists in the scope of a channel - it can be shared across the entire network (assuming every participant is operating on one common channel) - or it can be privatized to only include a specific set of participants.

In the latter scenario, these participants would create a separate channel and thereby isolate/segregate their transactions and ledger. In order to solve scenarios that want to bridge the gap between total transparency and privacy, chaincode can be installed only on peers that need to access the asset states to perform reads and writes (in other words, if a chaincode is not installed on a peer, it will not be able to properly interface with the ledger).

To further obfuscate the data, values within chaincode can be encrypted (in part or in total) using common cryptographic algorithms such as AES before sending transactions to the ordering service and appending blocks to the ledger. Once encrypted data has been written to the ledger, it can only be decrypted by a user in possession of the corresponding key that was used to generate the cipher text. For further details on chaincode encryption, see the Chaincode for Developers topic.
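A minimal sketch of that encrypt-before-write pattern, using the Go standard library's AES-GCM rather than any Fabric-specific utility (the zeroed key and the value below are placeholders; real key material would be derived and distributed out of band):

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encryptValue seals a plaintext with AES-GCM; only holders of the same
// 32-byte key can decrypt what is ultimately written to the ledger.
func encryptValue(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the decrypting party can recover it.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // placeholder key; never use an all-zero key
	ciphertext, err := encryptValue(key, []byte("special price: 42"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", ciphertext)
}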

Security & Membership Services

Hyperledger Fabric underpins a transactional network where all participants have known identities. Public Key Infrastructure is used to generate cryptographic certificates which are tied to organizations, network components, and end users or client applications. As a result, data access control can be manipulated and governed on the broader network and on channel levels. This “permissioned” notion of Hyperledger Fabric, coupled with the existence and capabilities of channels, helps address scenarios where privacy and confidentiality are paramount concerns.

See the Membership Service Providers (MSP) topic to better understand cryptographic implementations, and the sign, verify, authenticate approach used in Hyperledger Fabric.

Consensus

In distributed ledger technology, consensus has recently become synonymous with a specific algorithm, within a single function. However, consensus encompasses more than simply agreeing upon the order of transactions, and this differentiation is highlighted in Hyperledger Fabric through its fundamental role in the entire transaction flow, from proposal and endorsement, to ordering, validation and commitment. In a nutshell, consensus is defined as the full-circle verification of the correctness of a set of transactions comprising a block.

Consensus is ultimately achieved when the order and results of a block’s transactions have met the explicit policy criteria checks. These checks and balances take place during the lifecycle of a transaction, and include the usage of endorsement policies to dictate which specific members must endorse a certain transaction class, as well as system chaincodes to ensure that these policies are enforced and upheld. Prior to commitment, the peers will employ these system chaincodes to make sure that enough endorsements are present, and that they were derived from the appropriate entities. Moreover, a versioning check will take place during which the current state of the ledger is agreed or consented upon, before any blocks containing transactions are appended to the ledger. This final check provides protection against double spend operations and other threats that might compromise data integrity, and allows for functions to be executed against non-static variables.

In addition to the multitude of endorsement, validity and versioning checks that take place, there are also ongoing identity verifications happening in all directions of the transaction flow. Access control lists are implemented on hierarchal layers of the network (ordering service down to channels), and payloads are repeatedly signed, verified and authenticated as a transaction proposal passes through the different architectural components. To conclude, consensus is not merely limited to the agreed upon order of a batch of transactions, but rather, it is an overarching characterization that is achieved as a byproduct of the ongoing verifications that take place during a transaction’s journey from proposal to commitment.

Check out the Transaction Flow diagram for a visual representation of consensus.

Identity

What is an Identity?

The different actors in a blockchain network include peers, orderers, client applications, administrators and more. Each of these actors has an identity that is encapsulated in an X.509 digital certificate. These identities really matter because they determine the exact permissions over resources that actors have in a blockchain network. Hyperledger Fabric uses certain properties in an actor’s identity to determine permissions, and it gives them a special name – a principal. Principals are just like userIDs or groupIDs, but a little more flexible because they can include a wide range of an actor’s identity properties. When we talk about principals, we’re thinking about the actors in the system – specifically the actor’s identity properties which determine their permissions. These properties are typically the actor’s organization, organizational unit, role or even the actor’s specific identity.

Most importantly, an identity must be verifiable (a real identity, in other words), and for this reason it must come from an authority trusted by the system. A membership service provider (MSP) is the means to achieve this in Hyperledger Fabric. More specifically, an MSP is a component that represents the membership rules of an organization, and as such, it defines the rules that govern a valid identity of a member of this organization. The default MSP implementation in Fabric uses X.509 certificates as identities, adopting a traditional Public Key Infrastructure (PKI) hierarchical model.

A Simple Scenario to Explain The Use of an Identity

Imagine that you visit a supermarket to buy some groceries. At the checkout you see a sign that says that only Visa, Mastercard and AMEX cards are accepted. If you try to pay with a different card – let’s call it an “ImagineCard” – it doesn’t matter whether the card is authentic and you have sufficient funds in your account. It will not be accepted.

Having a valid credit card is not enough – it must also be accepted by the store! PKIs and MSPs work together in the same way – a PKI provides a list of identities, and an MSP says which of these are members of a given organization that participates in the network.

PKI certificate authorities and MSPs provide a similar combination of functionalities. A PKI is like a card provider – it dispenses many different types of verifiable identities. An MSP, on the other hand, is like the list of card providers accepted by the store – determining which identities are the trusted members (actors) of the store payment network. MSPs turn verifiable identities into the members of a blockchain network.

Let’s drill into these concepts in a little more detail.

What are PKIs?

A public key infrastructure (PKI) is a collection of internet technologies that provides secure communications in a network. It’s PKI that puts the S in HTTPS – and if you’re reading this documentation on a web browser, you’re probably using a PKI to make sure it comes from a verified source.

The elements of Public Key Infrastructure (PKI). A PKI is comprised of Certificate Authorities who issue digital certificates to parties (e.g., users of a service, service provider), who then use them to authenticate themselves in the messages they exchange with their environment. A CA’s Certificate Revocation List (CRL) constitutes a reference for the certificates that are no longer valid. Revocation of a certificate can happen for a number of reasons. For example, a certificate may be revoked because the cryptographic private material associated to the certificate has been exposed.

Although a blockchain network is more than a communications network, it relies on the PKI standard to ensure secure communication between various network participants, and to ensure that messages posted on the blockchain are properly authenticated. It’s therefore really important to understand the basics of PKI and then why MSPs are so important.

There are four key elements to PKI:

  • Digital Certificates
  • Public and Private Keys
  • Certificate Authorities
  • Certificate Revocation Lists

Let’s quickly describe these PKI basics, and if you want to know more details, Wikipedia is a good place to start.

Digital Certificates

A digital certificate is a document which holds a set of attributes relating to a party. The most common type of certificate is the one compliant with the X.509 standard, which allows the encoding of a party’s identifying details in its structure. For example, John Doe of the Accounting division in FOO Corporation in Detroit, Michigan might have a digital certificate with a SUBJECT attribute of C=US, ST=Michigan, L=Detroit, O=FOO Corporation, OU=Accounting, CN=John Doe/UID=123456. John’s certificate is similar to his government identity card – it provides information about John which he can use to prove key facts about him. There are many other attributes in an X.509 certificate, but let’s concentrate on just these for now.

A digital certificate describing a party called John Doe. John is the SUBJECT of the certificate, and the highlighted SUBJECT text shows key facts about John. The certificate also holds many more pieces of information, as you can see. Most importantly, John’s public key is distributed within his certificate, whereas his private signing key is not. This signing key must be kept private.

What is important is that all of John’s attributes can be recorded using a mathematical technique called cryptography (literally, “secret writing”) so that tampering will invalidate the certificate. Cryptography allows John to present his certificate to others to prove his identity so long as the other party trusts the certificate issuer, known as a Certificate Authority (CA). As long as the CA keeps certain cryptographic information securely (meaning, its own private signing key), anyone reading the certificate can be sure that the information about John has not been tampered with – it will always have those particular attributes for John Doe. Think of John’s X.509 certificate as a digital identity card that is impossible to change.

Authentication & Public keys and Private Keys

Authentication and message integrity are important concepts of secure communication. Authentication requires that parties who exchange messages can be assured of the identity that created a specific message. Integrity requires that the message was not modified during its transmission. For example, you might want to be sure you’re communicating with the real John Doe rather than an impersonator. Or if John has sent you a message, you might want to be sure that it hasn’t been tampered with by anyone else during transmission.

Traditional authentication mechanisms rely on digital signature mechanisms, which, as the name suggests, allow a party to digitally sign its messages. Digital signatures also provide guarantees on the integrity of the signed message.

Technically speaking, digital signature mechanisms require each party to hold two cryptographically connected keys: a public key that is made widely available and acts as an authentication anchor, and a private key that is used to produce digital signatures on messages. Recipients of digitally signed messages can verify the origin and integrity of a received message by checking that the attached signature is valid under the public key of the expected sender.

The unique relationship between a private key and the respective public key is the cryptographic magic that makes secure communications possible. The unique mathematical relationship between the keys is such that the private key can be used to produce a signature on a message that only the corresponding public key can match, and only on the same message.

In the example above, to authenticate his message John uses his private key to produce a signature on the message, which he then attaches to the message. The signature can be verified by anyone who sees the signed message, using John’s public key.
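The sign-with-private / verify-with-public relationship can be demonstrated with nothing more than the Go standard library. This is generic ECDSA, not Fabric's internal signing code:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// John's key pair: the private key signs, the public key verifies.
	private, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	digest := sha256.Sum256([]byte("a message from John"))
	r, s, _ := ecdsa.Sign(rand.Reader, private, digest[:])

	// Anyone holding John's public key can check the signature.
	fmt.Println(ecdsa.Verify(&private.PublicKey, digest[:], r, s)) // true

	// The same signature fails against a different (tampered) message.
	other := sha256.Sum256([]byte("a tampered message"))
	fmt.Println(ecdsa.Verify(&private.PublicKey, other[:], r, s)) // false
}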

Certificate Authorities

As you’ve seen, an actor or a node is able to participate in the blockchain network via the means of a digital identity issued for it by an authority trusted by the system. In the most common case, digital identities (or simply identities) have the form of cryptographically validated digital certificates that comply with the X.509 standard and are issued by a Certificate Authority (CA).

CAs are a common part of internet security protocols, and you’ve probably heard of some of the more popular ones: Symantec (originally Verisign), GeoTrust, DigiCert, GoDaddy, and Comodo, among others.

A Certificate Authority dispenses certificates to different actors. These certificates are digitally signed by the CA (i.e., using the CA’s private key), and bind together the actual actor with the actor’s public key, and optionally with a comprehensive list of properties. Clearly, if one trusts the CA (and knows its public key), it can (by validating the CA’s signature on the actor’s certificate) trust that the specific actor is bound to the public key included in the certificate, and owns the included attributes.

Crucially, certificates can be widely disseminated, as they include neither the actors’ nor the actual CA’s private keys. As such they can be used as anchors of trust for authenticating messages coming from different actors.

In reality, CAs themselves also have a certificate, which they make widely available. This allows the consumers of identities issued by a given CA to verify them by checking that the certificate could only have been generated by the holder of the corresponding private key (the CA).

In the blockchain setting, every actor who wishes to interact with the network needs an identity. In this setting, you might say that one or more CAs can be used to define the members of an organization from a digital perspective. It’s the CA that provides the basis for an organization’s actors to have a verifiable digital identity.

Root CAs, Intermediate CAs and Chains of Trust

CAs come in two flavors: Root CAs and Intermediate CAs. Because Root CAs (Symantec, Geotrust, etc) have to securely distribute hundreds of millions of certificates to internet users, it makes sense to spread this process out across what are called Intermediate CAs. These Intermediate CAs have their certificates issued by the root CA or another intermediate authority, allowing the establishment of a “chain of trust” for any certificate that is issued by any CA in the chain. This ability to track back to the Root CA not only allows the function of CAs to scale while still providing security – allowing organizations that consume certificates to use Intermediate CAs with confidence – it limits the exposure of the Root CA, which, if compromised, would endanger the entire chain of trust. If an Intermediate CA is compromised, on the other hand, there is a much smaller exposure.

A chain of trust is established between a Root CA and a set of Intermediate CAs as long as the issuing CA for the certificate of each of these Intermediate CAs is either the Root CA itself or has a chain of trust to the Root CA.

Intermediate CAs provide a huge amount of flexibility when it comes to the issuance of certificates across multiple organizations, and that’s very helpful in a permissioned blockchain system. For example, you’ll see that different organizations may use different Root CAs, or the same Root CA with different Intermediate CAs – it really does depend on the needs of the network.
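Go's standard library can illustrate how such a chain is checked. This sketch (the DER-encoded inputs are placeholders for real certificate bytes) verifies a certificate back to a trusted root through an intermediate:

package main

import "crypto/x509"

// verifyChain reports whether cert chains back to the trusted root,
// possibly through the intermediate -- the "chain of trust" above.
func verifyChain(certDER, intermediateDER, rootDER []byte) error {
	cert, err := x509.ParseCertificate(certDER)
	if err != nil {
		return err
	}
	intermediate, err := x509.ParseCertificate(intermediateDER)
	if err != nil {
		return err
	}
	root, err := x509.ParseCertificate(rootDER)
	if err != nil {
		return err
	}

	roots := x509.NewCertPool()
	roots.AddCert(root)
	intermediates := x509.NewCertPool()
	intermediates.AddCert(intermediate)

	// Verify walks the possible chains; a nil error means at least one
	// valid chain to a trusted root was found.
	_, err = cert.Verify(x509.VerifyOptions{
		Roots:         roots,
		Intermediates: intermediates,
	})
	return err
}

func main() {
	// Supply DER bytes from your own CAs to exercise verifyChain.
}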

Fabric CA

It’s because CAs are so important that Fabric provides a built-in CA component to allow you to create CAs in the blockchain networks you form. This component – known as fabric-ca – is a private root CA provider capable of managing digital identities of Fabric participants that have the form of X.509 certificates. Because Fabric-CA is a custom CA targeting the Root CA needs of Fabric, it is inherently not capable of providing SSL certificates for general/automatic use in browsers. However, because some CA must be used to manage identity (even in a test environment), fabric-ca can be used to provide and manage certificates. It is also possible – and fully appropriate – to use a public/commercial root or intermediate CA to provide identification.

If you’re interested, you can read a lot more about fabric-ca in the CA documentation section.

Certificate Revocation Lists

A Certificate Revocation List (CRL) is easy to understand – it’s just a list of references to certificates that a CA knows to be revoked for one reason or another. If you recall the store scenario, a CRL would be like a list of stolen credit cards.

When a third party wants to verify another party’s identity, it first checks the issuing CA’s CRL to make sure that the certificate has not been revoked. A verifier doesn’t have to check the CRL, but if they don’t they run the risk of accepting a compromised identity.

Using a CRL to check that a certificate is still valid. If an impersonator tries to pass a compromised digital certificate to a validating party, it can be first checked against the issuing CA’s CRL to make sure it’s not listed as no longer valid.

Note that a certificate being revoked is very different from a certificate expiring. Revoked certificates have not expired – they are, by every other measure, a fully valid certificate. This is similar to the difference between an expired driver’s license and a revoked driver’s license. For more in depth information into CRLs, click here.
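A sketch of the revocation check using the Go standard library (fetching the CRL bytes from the CA's distribution point is out of scope here):

package main

import "crypto/x509"

// isRevoked reports whether cert's serial number appears on the issuing
// CA's CRL. crlBytes holds the (DER- or PEM-encoded) CRL as published.
func isRevoked(cert *x509.Certificate, crlBytes []byte) (bool, error) {
	crl, err := x509.ParseCRL(crlBytes)
	if err != nil {
		return false, err
	}
	for _, revoked := range crl.TBSCertList.RevokedCertificates {
		if cert.SerialNumber.Cmp(revoked.SerialNumber) == 0 {
			return true, nil // listed: treat the identity as compromised
		}
	}
	return false, nil
}

func main() {
	// Supply a parsed certificate and CRL bytes to exercise isRevoked.
}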

Now that you’ve seen how a PKI can provide verifiable identities through a chain of trust, the next step is to see how these identities can be used to represent the trusted members of a blockchain network. That’s where a Membership Service Provider (MSP) comes into play – it identifies the parties who are the members of a given organization in the blockchain network.

To learn more about membership, check out the conceptual documentation on MSPs.

Membership

If you’ve read through the documentation on Identity you’ve seen how a PKI can provide verifiable identities through a chain of trust. Now let’s see how these identities can be used to represent the trusted members of a blockchain network.

This is where a Membership Service Provider (MSP) comes into play – it identifies which Root CAs and Intermediate CAs are trusted to define the members of a trust domain, e.g., an organization, either by listing the identities of their members, or by identifying which CAs are authorized to issue valid identities for their members, or – as will usually be the case – through a combination of both.

The power of an MSP goes beyond simply listing who is a network participant or member of a channel. An MSP can identify specific roles an actor might play within the scope of the organization (trust domain) the MSP represents (e.g., MSP admins, members of an organization subdivision), and sets the basis for defining access privileges in the context of a network and channel (e.g., channel admins, readers, writers). The configuration of an MSP is advertised to all the channels where members of the corresponding organization participate (in the form of a channel MSP). Peers, orderers and clients also maintain a local MSP instance (also known as a local MSP) to authenticate messages of members of their organization outside the context of a channel. In addition, an MSP can allow for the identification of a list of identities that have been revoked (we discussed this in the Identity documentation, but will talk about how that process also extends to an MSP).

We’ll talk more about local and channel MSPs in a moment. For now let’s talk more about what MSPs do in general.

Mapping MSPs to Organizations

An organization is a managed group of members and can be something as big as a multinational corporation or as small as a flower shop. What’s most important about organizations (or orgs) is that they manage their members under a single MSP. Note that this is different from the organization concept defined in an X.509 certificate, which we’ll talk about later.

The exclusive relationship between an organization and its MSP makes it sensible to name the MSP after the organization, a convention you’ll find adopted in most policy configurations. For example, organization ORG1 would have an MSP called ORG1-MSP. In some cases an organization may require multiple membership groups – for example, where channels are used to perform very different business functions between organisations. In these cases it makes sense to have multiple MSPs and name them accordingly, e.g., ORG2-MSP-NATIONAL and ORG2-MSP-GOVERNMENT, reflecting the different membership roots of trust within ORG2 in the NATIONAL sales channel compared to the GOVERNMENT regulatory channel.

Two different MSP configurations for an organization. The first configuration shows the typical relationship between an MSP and an organization – a single MSP defines the list of members of an organization. In the second configuration, different MSPs are used to represent different organizational groups with national, international, and governmental affiliation.

Organizational Units and MSPs

An organization is often divided up into multiple organizational units (OUs), each of which has a certain set of responsibilities. For example, the ORG1 organization might have both ORG1-MANUFACTURING and ORG1-DISTRIBUTION OUs to reflect these separate lines of business. When a CA issues X.509 certificates, the OU field in the certificate specifies the line of business to which the identity belongs.

We’ll see later how OUs can be helpful to control the parts of an organization that are considered to be the members of a blockchain network. For example, only identities from the ORG1-MANUFACTURING OU might be able to access a channel, whereas ORG1-DISTRIBUTION cannot.


Finally, though this is a slight misuse of OUs, they can sometimes be used by different organizations in a consortium to distinguish each other. In such cases, the different organizations use the same Root CAs and Intermediate CAs for their chain of trust, but assign the OU field appropriately to identify members of each organization. We’ll also see how to configure MSPs to achieve this later.


Local and Channel MSPs


MSPs appear in two places in a blockchain network: in the channel configuration (channel MSPs), and locally on an actor’s premise (local MSP). Local MSPs are defined for nodes (peer or orderer) and users (administrators that use the CLI or client applications that use the SDK). Every node and user must have a local MSP defined, as it defines who has administrative or participatory rights at that level and outside the context of a channel (who the administrators of a peer’s organization are, for example).


In contrast, channel MSPs define administrative and participatory rights at the channel level. Every organization participating in a channel must have an MSP defined for it. Peers and orderers on a channel will all share the same view on channel MSPs, and will henceforth be able to authenticate correctly the channel participants. This means that if an organization wishes to join the channel, an MSP incorporating the chain of trust for the organization’s members would need to be included in the channel configuration. Otherwise transactions originating from this organization’s identities will be rejected.


The key difference here between local and channel MSPs is not how they function, but their scope.


MSP2

Local and channel MSPs. The trust domain (e.g., organization) of each peer is defined by the peer’s local MSP, e.g., ORG1 or ORG2. Representation of an organization on a channel is achieved by the inclusion of the organization’s MSP in the channel. For example, the channel of this figure is managed by both ORG1 and ORG2. Similar principles apply for the network, orderers, and users, but these are not shown here for simplicity.


Local MSPs are only defined on the file system of the node or user to which they apply. Therefore, physically and logically there is only one local MSP per node or user. However, as channel MSPs are available to all nodes in the channel, they are logically defined once, in the channel configuration. At the same time, a channel MSP is instantiated on the file system of every node in the channel and kept synchronized via consensus. So while there is a copy of each channel MSP on the local file system of every node, logically a channel MSP resides on and is maintained by the channel or the network.


You may find it helpful to see how local and channel MSPs are used by seeing what happens when a blockchain administrator installs and instantiates a smart contract, as shown in the diagram above.


An administrator B connects to the peer with an identity issued by RCA1 and stored in their local MSP. When B tries to install a smart contract on the peer, the peer checks its local MSP, ORG1-MSP, to verify that the identity of B is indeed a member of ORG1. A successful verification will allow the install command to complete successfully. Subsequently, B wishes to instantiate the smart contract on the channel. Because this is a channel operation, all organizations in the channel must agree to it. Therefore, the peer must check the MSPs of the channel before it can successfully commit this command. (Other things must happen too, but concentrate on the above for now.)


One can observe that channel MSPs, just like the channel itself, are a logical construct. They only become physical once they are instantiated on the local file system of the peers of the channel organizations, which manage them.


MSP Levels


The split between channel and local MSPs reflects the needs of organizations to administer their local resources, such as peer or orderer nodes, and their channel resources, such as ledgers, smart contracts, and consortia, which operate at the channel or network level. It’s helpful to think of these MSPs as being at different levels, with MSPs at a higher level relating to network administration concerns while MSPs at a lower level handle identity for the administration of private resources. MSPs are mandatory at every level of administration – they must be defined for the network, channel, peer, orderer and users.


MSP3

MSP Levels. The MSPs for the peer and orderer are local, whereas the MSPs for a channel (including the network configuration channel) are shared across all participants of that channel. In this figure, the network configuration channel is administered by ORG1, but another application channel can be managed by ORG1 and ORG2. The peer is a member of and managed by ORG2, whereas ORG1 manages the orderer of the figure. ORG1 trusts identities from RCA1, whereas ORG2 trusts identities from RCA2. Note that these are administration identities, reflecting who can administer these components. So while ORG1 administers the network, ORG2.MSP does exist in the network definition.


  • Network MSP: The configuration of a network defines who are the members in the network — by defining the participant organizations MSPs — as well as which of these members are authorized to perform administrative tasks (e.g., creating a channel).
  • Channel MSP: It is important for a channel to maintain the MSPs of its members separately. A channel provides private communications between a particular set of organizations, which in turn have administrative control over it. Channel policies interpreted in the context of that channel’s MSPs define who has the ability to participate in certain actions on the channel, e.g., adding organizations, or instantiating chaincodes. Note that there is no necessary relationship between the permission to administrate a channel and the ability to administrate the network configuration channel (or any other channel). Administrative rights exist within the scope of what is being administrated (unless the rules have been written otherwise – see the discussion of the ROLE attribute below).
  • Peer MSP: This local MSP is defined on the file system of each peer and there is a single MSP instance for each peer. Conceptually, it performs exactly the same function as channel MSPs with the restriction that it only applies to the peer where it is defined. An example of an action whose authorization is evaluated using the peer’s local MSP is the installation of a chaincode on the peer premise.
  • Orderer MSP: Like a peer MSP, an orderer local MSP is also defined on the file system of the node and only applies to that node. Like peer nodes, orderers are also owned by a single organization and therefore have a single MSP to list the actors or nodes it trusts.

MSP Structure


So far, you’ve seen that the two most important elements of an MSP are the specification of the (root or intermediate) CAs that are used to establish an actor’s or node’s membership in the respective organization. There are, however, more elements that are used in conjunction with these two to assist with membership functions.


MSP4

The figure above shows how a local MSP is stored on a local filesystem. Even though channel MSPs are not physically structured in exactly this way, it’s still a helpful way to think about them.


As you can see, there are nine elements to an MSP. It’s easiest to think of these elements in a directory structure, where the MSP name is the root folder name with each subfolder representing different elements of an MSP configuration.

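For orientation, a local MSP laid out on disk with the folder names Fabric conventionally generates might look like the sketch below (names can vary by configuration):

ORG1-MSP
├── cacerts                 (Root CAs)
├── intermediatecerts       (Intermediate CAs)
├── config.yaml             (Organizational Units)
├── admincerts              (Administrators)
├── crls                    (Revoked Certificates)
├── signcerts               (Node Identity)
├── keystore                (KeyStore for Private Key)
├── tlscacerts              (TLS Root CA)
└── tlsintermediatecerts    (TLS Intermediate CA)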

Let’s describe these folders in a little more detail and see why they are important.


  • Root CAs: This folder contains a list of self-signed X.509 certificates of the Root CAs trusted by the organization represented by this MSP. There must be at least one Root CA X.509 certificate in this MSP folder.


    This is the most important folder because it identifies the CAs from which all other certificates must be derived to be considered members of the corresponding organization.

  • Intermediate CAs: This folder contains a list of X.509 certificates of the Intermediate CAs trusted by this organization. Each certificate must be signed by one of the Root CAs in the MSP or by an Intermediate CA whose issuing CA chain ultimately leads back to a trusted Root CA.


    To see intuitively how an intermediate CA is used with respect to the corresponding organization’s structure: an intermediate CA may represent a different subdivision of the organization, or the organization itself (e.g., if a commercial CA is leveraged for the organization’s identity management). In the latter case, other intermediate CAs lower in the CA hierarchy can be used to represent organization subdivisions. Here you may find more information on best practices for MSP configuration. Notice that it is possible to have a functioning network that does not have any Intermediate CA, in which case this folder would be empty.


    Like the Root CA folder, this folder defines the CAs from which certificates must be issued to be considered members of the organization.


  • Organizational Units (OUs): These are listed in the $FABRIC_CFG_PATH/msp/config.yaml file and contain a list of organizational units whose members are considered to be part of the organization represented by this MSP. This is particularly useful when you want to restrict the members of an organization to the ones holding an identity (signed by one of the MSP’s designated CAs) with a specific OU in it (see the config.yaml sketch after this list).


    Specifying OUs is optional. If no OUs are listed, all the identities that are part of an MSP – as identified by the Root CA and Intermediate CA folders – will be considered members of the organization.


  • Administrators: This folder contains a list of identities that define the actors who have the role of administrators for this organization. For the standard MSP type, there should be one or more X.509 certificates in this list.


    It’s worth noting that just because an actor has the role of an administrator it doesn’t mean that they can administer particular resources! The actual power a given identity has with respect to administering the system is determined by the policies that manage system resources. For example, a channel policy might specify that ORG1-MANUFACTURING administrators have the rights to add new organizations to the channel, whereas the ORG1-DISTRIBUTION administrators have no such rights.


    Even though an X.509 certificate has a ROLE attribute (specifying, for example, that an actor is an admin), this refers to an actor’s role within its organization rather than on the blockchain network. This is similar to the purpose of the OU attribute, which – if it has been defined – refers to an actor’s place in the organization.


    The ROLE attribute can be used to confer administrative rights at the channel level if the policy for that channel has been written to allow any administrator from an organization (or certain organizations) permission to perform certain channel functions (such as instantiating chaincode). In this way, an organization role can confer a network role. This is conceptually similar to how having a driver’s license issued by the US state of Florida entitles someone to drive in every state in the US.


  • Revoked Certificates: If the identity of an actor has been revoked, identifying information about the identity – not the identity itself – is held in this folder. For X.509-based identities, these identifiers are pairs of strings known as Subject Key Identifier (SKI) and Authority Key Identifier (AKI), and are checked whenever the X.509 certificate is being used to make sure the certificate has not been revoked.


    This list is conceptually the same as a CA’s Certificate Revocation List (CRL), but it also relates to revocation of membership from the organization. As a result, the administrator of an MSP, local or channel, can quickly revoke an actor or node from an organization by advertising the updated CRL of the CA that issued the revoked certificate. This “list of lists” is optional. It will only become populated as certificates are revoked.


  • Node Identity: This folder contains the identity of the node, i.e., cryptographic material that – in combination with the content of KeyStore – would allow the node to authenticate itself in the messages that it sends to other participants of its channels and network. For X.509 based identities, this folder contains an X.509 certificate. This is the certificate a peer places in a transaction proposal response, for example, to indicate that the peer has endorsed it – which can subsequently be checked against the resulting transaction’s endorsement policy at validation time.


    This folder is mandatory for local MSPs, and there must be exactly one X.509 certificate for the node. It is not used for channel MSPs.


  • KeyStore for Private Key: This folder is defined for the local MSP of a peer or orderer node (or in a client’s local MSP), and contains the node’s signing key. This key matches cryptographically the node’s identity included in the Node Identity folder and is used to sign data – for example to sign a transaction proposal response, as part of the endorsement phase.


    This folder is mandatory for local MSPs, and must contain exactly one private key. Obviously, access to this folder must be limited only to the identities of users who have administrative responsibility on the peer.


    The configuration of a channel MSP does not include this part, as channel MSPs aim to offer solely identity validation functionalities, and not signing abilities.


  • TLS Root CA: This folder contains a list of self-signed X.509 certificates of the Root CAs trusted by this organization for TLS communications. An example of a TLS communication would be when a peer needs to connect to an orderer so that it can receive ledger updates.


    MSP TLS information relates to the nodes inside the network, i.e., the peers and the orderers, rather than those that consume the network – applications and administrators.


    There must be at least one TLS Root CA X.509 certificate in this folder.


  • TLS Intermediate CA: This folder contains a list of intermediate CA certificates trusted, for TLS communications, by the organization represented by this MSP. This folder is specifically useful when commercial CAs are used for TLS certificates of an organization. Similar to membership intermediate CAs, specifying intermediate TLS CAs is optional.


    For more information on TLS, click here.

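Returning to the Organizational Units element above, a minimal config.yaml sketch might look as follows; the certificate path and OU name are illustrative assumptions:

OrganizationalUnitIdentifiers:
  - Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "ORG1-MANUFACTURING"

With this in place, only identities issued under the listed CA certificate and carrying the ORG1-MANUFACTURING OU would be considered members of this MSP.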

If you’ve read this doc as well as our doc on Identity, you should have a pretty good grasp of how identities and membership work in Hyperledger Fabric. You’ve seen how a PKI and MSPs are used to identify the actors collaborating in a blockchain network. You’ve learned how certificates, public/private keys, and roots of trust work, in addition to how MSPs are physically and logically structured.


Peers


A blockchain network is primarily comprised of a set of peer nodes. Peers are a fundamental element of the network because they host ledgers and smart contracts. Recall that a ledger immutably records all the transactions generated by smart contracts. Smart contracts and ledgers are used to encapsulate the shared processes and shared information in a network, respectively. These aspects of a peer make them a good starting point to understand a Hyperledger Fabric network.

Other elements of the blockchain network are of course important: ledgers and smart contracts, orderers, policies, channels, applications, organizations, identities, and membership, and you can read more about them in their own dedicated topics. This topic focuses on peers, and their relationship to those other elements in a Hyperledger Fabric blockchain network.

Peer1

A blockchain network is formed from peer nodes, each of which can hold copies of ledgers and copies of smart contracts. In this example, the network N is formed by peers P1, P2 and P3. P1, P2 and P3 each maintain their own instance of the ledger L1. P1, P2 and P3 use chaincode S1 to access their copy of the ledger L1.

Peers can be created, started, stopped, reconfigured, and even deleted. They expose a set of APIs that enable administrators and applications to interact with the services that they provide. We’ll learn more about these services in this topic.

A word on terminology


Hyperledger Fabric implements smart contracts with a technology concept it calls chaincode – simply a piece of code that accesses the ledger, written in one of the supported programming languages. In this topic, we’ll usually use the term chaincode, but feel free to read it as smart contract if you’re more used to this term. It’s the same thing!

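To make the term concrete, here is a minimal Go chaincode sketch against the Fabric 1.x shim API; the asset1 key and its values are arbitrary examples:

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// SimpleChaincode is a minimal smart contract that reads and writes one key.
type SimpleChaincode struct{}

// Init is called when the chaincode is instantiated on a channel.
func (t *SimpleChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	if err := stub.PutState("asset1", []byte("100")); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}

// Invoke is called for every transaction proposal sent to this chaincode.
func (t *SimpleChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, _ := stub.GetFunctionAndParameters()
	switch fn {
	case "query":
		value, err := stub.GetState("asset1") // read-only access to the ledger
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(value)
	case "update":
		if err := stub.PutState("asset1", []byte("200")); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	}
	return shim.Error("unknown function")
}

func main() {
	if err := shim.Start(new(SimpleChaincode)); err != nil {
		panic(err)
	}
}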

Ledgers and Chaincode


Let’s look at a peer in a little more detail. We can see that it’s the peer that hosts both the ledger and chaincode. More accurately, the peer actually hosts instances of the ledger, and instances of chaincode. Note that this provides a deliberate redundancy in a Fabric network – it avoids single points of failure. We’ll learn more about the distributed and decentralized nature of a blockchain network later in this topic.

Peer2

A peer hosts instances of ledgers and instances of chaincodes. In this example, P1 hosts an instance of ledger L1 and an instance of chaincode S1. There can be many ledgers and chaincodes hosted on an individual peer.

Because a peer is a host for ledgers and chaincodes, applications and administrators must interact with a peer if they want to access these resources. That’s why peers are considered the most fundamental building blocks of a Hyperledger Fabric blockchain network. When a peer is first created, it has neither ledgers nor chaincodes. We’ll see later how ledgers get created, and chaincodes get installed, on peers.

Multiple Ledgers

A peer is able to host more than one ledger, which is helpful because it allows for a flexible system design. The simplest peer configuration is to have a single ledger, but it’s absolutely appropriate for a peer to host two or more ledgers when required.

Peer3

A peer hosting multiple ledgers. Peers host one or more ledgers, and each ledger has zero or more chaincodes that apply to them. In this example, we can see that the peer P1 hosts ledgers L1 and L2. Ledger L1 is accessed using chaincode S1. Ledger L2 on the other hand can be accessed using chaincodes S1 and S2.

Although it is perfectly possible for a peer to host a ledger instance without hosting any chaincodes which access it, it’s very rare that peers are configured this way. The vast majority of peers will have at least one chaincode installed on it which can query or update the peer’s ledger instances. It’s worth mentioning in passing that, whether or not users have installed chaincodes for use by external applications, peers also have special system chaincodes that are always present. These are not discussed in detail in this topic.

Multiple Chaincodes


There isn’t a fixed relationship between the number of ledgers a peer has and the number of chaincodes that can access that ledger. A peer might have many chaincodes and many ledgers available to it.

Peer4

An example of a peer hosting multiple chaincodes. Each ledger can have many chaincodes which access it. In this example, we can see that peer P1 hosts ledgers L1 and L2. L1 is accessed by chaincodes S1 and S2, whereas L2 is accessed by S3 and S1. We can see that S1 can access both L1 and L2.

We’ll see a little later why the concept of channels in Hyperledger Fabric is important when hosting multiple ledgers or multiple chaincodes on a peer.

Applications and Peers


We’re now going to show how applications interact with peers to access the ledger. Ledger-query interactions involve a simple three step dialogue between an application and a peer; ledger-update interactions are a little more involved, and require two extra steps. We’ve simplified these steps a little to help you get started with Hyperledger Fabric, but don’t worry – what’s most important to understand is the difference in application-peer interactions for ledger-query compared to ledger-update transaction styles.

Applications always connect to peers when they need to access ledgers and chaincodes. The Hyperledger Fabric Software Development Kit (SDK) makes this easy for programmers – its APIs enable applications to connect to peers, invoke chaincodes to generate transactions, submit transactions to the network that will get ordered and committed to the distributed ledger, and receive events when this process is complete.

Through a peer connection, applications can execute chaincodes to query or update the ledger. The result of a ledger query transaction is returned immediately, whereas ledger updates involve a more complex interaction between applications, peers, and orderers. Let’s investigate in a little more detail.

Peer6

Peers, in conjunction with orderers, ensure that the ledger is kept up-to-date on every peer. In this example application A connects to P1 and invokes chaincode S1 to query or update the ledger L1. P1 invokes S1 to generate a proposal response that contains a query result or a proposed ledger update. Application A receives the proposal response, and for queries the process is now complete. For updates, A builds a transaction from all the responses, which it sends to O1 for ordering. O1 collects transactions from across the network into blocks, and distributes these to all peers, including P1. P1 validates the transaction before applying to L1. Once L1 is updated, P1 generates an event, received by A, to signify completion.

A peer can return the results of a query to an application immediately because all the information required to satisfy the query is in the peer’s local copy of the ledger. Peers do not consult with other peers in order to return a query to an application. Applications can, however, connect to one or more peers to issue a query – for example to corroborate a result between multiple peers, or retrieve a more up-to-date result from a different peer if there’s a suspicion that information might be out of date. In the diagram, you can see that ledger query is a simple three step process.

An update transaction starts in the same way as a query transaction, but has two extra steps. Although ledger-updating applications also connect to peers to invoke a chaincode, unlike with ledger-querying applications, an individual peer cannot perform a ledger update at this time, because other peers must first agree to the change – a process called consensus. Therefore, peers return to the application a proposed update – one that this peer would apply subject to other peers’ prior agreement. The first extra step – four – requires that applications send an appropriate set of matching proposed updates to the entire network of peers as a transaction for commitment to their respective ledgers. This is achieved by the application using an orderer to package transactions into blocks, and distribute them to the entire network of peers, where they can be verified before being applied to each peer’s local copy of the ledger. As this whole ordering processing takes some time to complete (seconds), the application is notified asynchronously, as shown in step five.
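From the application’s side, the five steps can be sketched as below. This is illustrative pseudocode in Go: the Peer, Orderer, Proposal, ProposalResponse and Transaction types, and the sendProposal, assembleTransaction, deliverToOrderer and waitForCommitEvent helpers, are all hypothetical stand-ins for SDK functionality, not real Fabric APIs:

// Hypothetical application-side update flow; helpers are illustrative only.
func updateLedger(peers []Peer, orderer Orderer, prop Proposal) error {
	// Steps 1-3: each endorsing peer simulates the chaincode and returns
	// a signed proposal response; nothing is written to any ledger yet.
	var responses []ProposalResponse
	for _, p := range peers {
		resp, err := sendProposal(p, prop)
		if err != nil {
			return err
		}
		responses = append(responses, resp)
	}
	// Step 4: matching responses are packaged into a transaction and sent
	// to the orderer, which orders it into a block and distributes the
	// block to every peer on the channel for validation and commit.
	tx := assembleTransaction(prop, responses)
	if err := deliverToOrderer(orderer, tx); err != nil {
		return err
	}
	// Step 5: the application is notified asynchronously once the
	// transaction has been validated and applied to the ledger.
	return waitForCommitEvent(tx)
}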

Later in this topic, you’ll learn more about the detailed nature of this ordering process – and for a really detailed look at this process see the Transaction Flow topic.

Peers and Channels

Although this topic is about peers rather than channels, it’s worth spending a little time understanding how peers interact with each other, and applications, via channels – a mechanism by which a set of components within a blockchain network can communicate and transact privately.

These components are typically peer nodes, orderer nodes, and applications, and by joining a channel they agree to come together to collectively share and manage identical copies of the ledger for that channel. Conceptually you can think of channels as being similar to groups of friends (though the members of a channel certainly don’t need to be friends!). A person might have several groups of friends, with each group having activities they do together. These groups might be totally separate (a group of work friends as compared to a group of hobby friends), or there can be crossover between them. Nevertheless each group is its own entity, with “rules” of a kind.

Peer5

Channels allow a specific set of peers and applications to communicate with each other within a blockchain network. In this example, application A can communicate directly with peers P1 and P2 using channel C. You can think of the channel as a pathway for communications between particular applications and peers. (For simplicity, orderers are not shown in this diagram, but must be present in a functioning network.)

We see that channels don’t exist in the same way that peers do – it’s more appropriate to think of a channel as a logical structure that is formed by a collection of physical peers. It is vital to understand this point – peers provide the control point for access to, and management of, channels.

Peers and Organizations


Now that you understand peers and their relationship to ledgers, chaincodes and channels, you’ll be able to see how multiple organizations come together to form a blockchain network.

Blockchain networks are administered by a collection of organizations rather than a single organization. Peers are central to how this kind of distributed network is built because they are owned by – and are the connection points to the network for – these organizations.

Peer8

Peers in a blockchain network with multiple organizations. The blockchain network is built up from the peers owned and contributed by the different organizations. In this example, we see four organizations contributing eight peers to form a network. The channel C connects five of these peers in the network N – P1, P3, P5, P7 and P8. The other peers owned by these organizations have not been joined to this channel, but are typically joined to at least one other channel. Applications that have been developed by a particular organization will connect to their own organization’s peers as well as those of different organizations. Again, for simplicity, an orderer node is not shown in this diagram.

It’s really important that you can see what’s happening in the formation of a blockchain network. The network is both formed and managed by the multiple organizations who contribute resources to it. Peers are the resources that we’re discussing in this topic, but the resources an organization provides are more than just peers. There’s a principle at work here – the network literally does not exist without organizations contributing their individual resources to the collective network. Moreover, the network grows and shrinks with the resources that are provided by these collaborating organizations.

You can see that (other than the ordering service) there are no centralized resources – in the example above, the network, N, would not exist if the organizations did not contribute their peers. This reflects the fact that the network does not exist in any meaningful sense unless and until organizations contribute the resources that form it. Moreover, the network does not depend on any individual organization – it will continue to exist as long as one organization remains, no matter which other organizations may come and go. This is at the heart of what it means for a network to be decentralized.

Applications in different organizations, as in the example above, may or may not be the same. That’s because it’s entirely up to an organization how its applications process their peers’ copies of the ledger. This means that both application and presentation logic may vary from organization to organization even though their respective peers host exactly the same ledger data.

Applications either connect to peers in their organization, or peers in another organization, depending on the nature of the ledger interaction that’s required. For ledger-query interactions, applications typically connect to their own organization’s peers. For ledger-update interactions, we’ll see later why applications need to connect to peers in every organization that is required to endorse the ledger update.

Peers and Identity


Now that you’ve seen how peers from different organizations come together to form a blockchain network, it’s worth spending a few moments understanding how peers get assigned to organizations by their administrators.

Peers have an identity assigned to them via a digital certificate from a particular certificate authority. You can read lots more about how X.509 digital certificates work elsewhere in this guide, but for now think of a digital certificate as being like an ID card that provides lots of verifiable information about a peer. Each and every peer in the network is assigned a digital certificate by an administrator from its owning organization.

Peer9

When a peer connects to a channel, its digital certificate identifies its owning organization via a channel MSP. In this example, P1 and P2 have identities issued by CA1. Channel C determines from a policy in its channel configuration that identities from CA1 should be associated with Org1 using ORG1.MSP. Similarly, P3 and P4 are identified by ORG2.MSP as being part of Org2.
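Conceptually, this membership check resembles standard X.509 chain verification. Here is a minimal Go sketch, where ca1PEM stands for CA1’s root certificate and peerCert for a peer’s certificate, both assumed to be loaded elsewhere:

package main

import (
	"crypto/x509"
	"fmt"
)

// verifyMembership checks that peerCert chains up to one of the MSP's roots.
func verifyMembership(ca1PEM []byte, peerCert *x509.Certificate) error {
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(ca1PEM) {
		return fmt.Errorf("could not parse root CA certificate")
	}
	// If verification succeeds, the identity is anchored in CA1 and is
	// therefore treated as a member of the corresponding organization.
	_, err := peerCert.Verify(x509.VerifyOptions{Roots: roots})
	return err
}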

Whenever a peer connects using a channel to a blockchain network, a policy in the channel configuration uses the peer’s identity to determine its rights. The mapping of identity to organization is provided by a component called a Membership Service Provider (MSP) – it determines how a peer gets assigned to a specific role in a particular organization and accordingly gains appropriate access to blockchain resources. Moreover, a peer can only be owned by a single organization, and is therefore associated with a single MSP. We’ll learn more about peer access control later in this topic, and there’s an entire topic on MSPs and access control policies elsewhere in this guide. But for now, think of an MSP as providing linkage between an individual identity and a particular organizational role in a blockchain network.

And to digress for a moment, peers as well as everything that interacts with a blockchain network acquire their organizational identity from their digital certificate and an MSP. Peers, applications, end users, administrators, orderers must have an identity and an associated MSP if they want to interact with a blockchain network. We give a name to every entity that interacts with a blockchain network using an identity – a principal. You can learn lots more about principals and organizations elsewhere in this guide, but for now you know more than enough to continue your understanding of peers!

Finally, note that it’s not really important where the peer is physically located – it could reside in the cloud, or in a data centre owned by one of the organizations, or on a local machine – it’s the identity associated with it that identifies it as owned by a particular organization. In our example above, P3 could be hosted in Org1’s data center, but as long as the digital certificate associated with it is issued by CA2, then it’s owned by Org2.

Peers and Orderers


We’ve seen that peers form a blockchain network, hosting ledgers and chaincodes, which can be queried and updated by peer-connected applications. However, the mechanism by which applications and peers interact with each other to ensure that every peer’s ledger is kept consistent is mediated by special nodes called orderers, and it’s these nodes to which we now turn our attention.

An update transaction is quite different to a query transaction because a single peer cannot, on its own, update the ledger – it requires the consent of other peers in the network. A peer requires other peers in the network to approve a ledger update before it can be applied to a peer’s local ledger. This process is called consensus – and takes much longer to complete than a query. But when all the peers required to approve the transaction do so, and the transaction is committed to the ledger, peers will notify their connected applications that the ledger has been updated. You’re about to be shown a lot more detail about how peers and orderers manage the consensus process in this section.

Specifically, applications that want to update the ledger are involved in a 3-phase process, which ensures that all the peers in a blockchain network keep their ledgers consistent with each other. In the first phase, applications work with a subset of endorsing peers, each of which provide an endorsement of the proposed ledger update to the application, but do not apply the proposed update to their copy of the ledger. In the second phase, these separate endorsements are collected together as transactions and packaged into blocks. In the final phase, these blocks are distributed back to every peer where each transaction is validated before being applied to that peer’s copy of the ledger.

As you will see, orderer nodes are central to this process – so let’s investigate in a little more detail how applications and peers use orderers to generate ledger updates that can be consistently applied to a distributed, replicated ledger.

Phase 1: Proposal

Phase 1 of the transaction workflow involves an interaction between an application and a set of peers – it does not involve orderers. Phase 1 is only concerned with an application asking different organizations’ endorsing peers to agree to the results of the proposed chaincode invocation.

To start phase 1, applications generate a transaction proposal which they send to each of the required set of peers for endorsement. Each peer then independently executes a chaincode using the transaction proposal to generate a transaction proposal response. It does not apply this update to the ledger, but rather the peer signs it and returns it to the application. Once the application has received a sufficient number of signed proposal responses, the first phase of the transaction flow is complete. Let’s examine this phase in a little more detail.

Peer10

Transaction proposals are independently executed by peers who return endorsed proposal responses. In this example, application A1 generates transaction T1 proposal P which it sends to both peer P1 and peer P2 on channel C. P1 executes S1 using transaction T1 proposal P generating transaction T1 response R1 which it endorses with E1. Independently, P2 executes S1 using transaction T1 proposal P generating transaction T1 response R2 which it endorses with E2. Application A1 receives two endorsed responses for transaction T1, namely E1 and E2.

Initially, a set of peers are chosen by the application to generate a set of proposed ledger updates. Which peers are chosen by the application? Well, that depends on the endorsement policy (defined for a chaincode), which defines the set of organizations that need to endorse a proposed ledger change before it can be accepted by the network. This is literally what it means to achieve consensus – every organization who matters must have endorsed the proposed ledger change before it will be accepted onto any peer’s ledger.

A peer endorses a proposal response by adding its digital signature, and signing the entire payload using its private key. This endorsement can be subsequently used to prove that this organization’s peer generated a particular response. In our example, if peer P1 is owned by organization Org1, endorsement E1 corresponds to a digital proof that “Transaction T1 response R1 on ledger L1 has been provided by Org1’s peer P1!”.
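A signature of this kind can be pictured with Go’s standard library; this is a generic ECDSA sketch, not Fabric’s actual signing code path, and the payload string is an arbitrary example:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"log"
)

func main() {
	// A peer's signing key; in Fabric this lives in the MSP keystore.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	payload := []byte("proposal response R1 for transaction T1")
	digest := sha256.Sum256(payload)
	// Sign the digest of the whole payload with the private key.
	r, s, err := ecdsa.Sign(rand.Reader, priv, digest[:])
	if err != nil {
		log.Fatal(err)
	}
	// Anyone holding the matching public key (from the peer's certificate)
	// can verify the endorsement.
	fmt.Println("signature valid:", ecdsa.Verify(&priv.PublicKey, digest[:], r, s))
}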

Phase 1 ends when the application receives signed proposal responses from sufficient peers. We note that different peers can return different and therefore inconsistent transaction responses to the application for the same transaction proposal. It might simply be that the result was generated at a different time on different peers with ledgers at different states – in which case an application can simply request a more up-to-date proposal response. Less likely, but much more seriously, results might be different because the chaincode is non-deterministic. Non-determinism is the enemy of chaincodes and ledgers and if it occurs it indicates a serious problem with the proposed transaction, as inconsistent results cannot, obviously, be applied to ledgers. An individual peer cannot know that their transaction result is non-deterministic – transaction responses must be gathered together for comparison before non-determinism can be detected. (Strictly speaking, even this is not enough, but we defer this discussion to the transaction topic, where non-determinism is discussed in detail.)
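A common source of non-determinism is chaincode that reads its local environment. The fragment below sketches the anti-pattern, extending the hypothetical SimpleChaincode from the earlier sketch (it additionally imports the time package):

// Anti-pattern: each endorsing peer evaluates time.Now() independently,
// so different peers produce different write-sets for the same proposal,
// and the resulting proposal responses will not match.
func (t *SimpleChaincode) badUpdate(stub shim.ChaincodeStubInterface) pb.Response {
	now := time.Now().String() // non-deterministic across peers
	if err := stub.PutState("lastUpdated", []byte(now)); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}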

At the end of phase 1, the application is free to discard inconsistent transaction responses if it wishes to do so, effectively terminating the transaction workflow early. We’ll see later that if an application tries to use an inconsistent set of transaction responses to update the ledger, it will be rejected.

Phase 2: Packaging

The second phase of the transaction workflow is the packaging phase. The orderer is pivotal to this process – it receives transactions containing endorsed transaction proposal responses from many applications. It orders each transaction relative to other transactions, and packages batches of transactions into blocks ready for distribution back to all peers connected to the orderer, including the original endorsing peers.

Peer11

The first role of an orderer node is to package proposed ledger updates. In this example, application A1 sends a transaction T1 endorsed by E1 and E2 to the orderer O1. In parallel, Application A2 sends transaction T2 endorsed by E1 to the orderer O1. O1 packages transaction T1 from application A1 and transaction T2 from application A2 together with other transactions from other applications in the network into block B2. We can see that in B2, the transaction order is T1,T2,T3,T4,T6,T5 – which may not be the order in which these transactions arrived at the orderer node! (This example shows a very simplified orderer configuration.)

An orderer receives proposed ledger updates concurrently from many different applications in the network on a particular channel. Its job is to arrange these proposed updates into a well-defined sequence, and package them into blocks for subsequent distribution. These blocks will become the blocks of the blockchain! Once an orderer has generated a block of the desired size, or after a maximum elapsed time, it will be sent to all peers connected to it on a particular channel. We’ll see how this block is processed in phase 3.
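The cut-by-size-or-timeout behavior can be sketched generically in Go; this is a simplified illustration of the idea, not the orderer’s actual implementation:

package main

import "time"

// cutBlocks groups incoming transactions into batches of at most maxTxs,
// or whatever has arrived when the timeout expires, mimicking how an
// orderer cuts blocks by desired size or maximum elapsed time.
func cutBlocks(txs <-chan []byte, blocks chan<- [][]byte, maxTxs int, timeout time.Duration) {
	var batch [][]byte
	timer := time.NewTimer(timeout)
	for {
		select {
		case tx, ok := <-txs:
			if !ok {
				if len(batch) > 0 {
					blocks <- batch // flush the final partial batch
				}
				close(blocks)
				return
			}
			batch = append(batch, tx)
			if len(batch) >= maxTxs { // cut on size
				blocks <- batch
				batch = nil
				timer.Reset(timeout)
			}
		case <-timer.C: // cut on elapsed time
			if len(batch) > 0 {
				blocks <- batch
				batch = nil
			}
			timer.Reset(timeout)
		}
	}
}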

It’s worth noting that the sequencing of transactions in a block is not necessarily the same as the order of arrival of transactions at the orderer! Transactions can be packaged in any order into a block, and it’s this sequence that becomes the order of execution. What’s important is that there is a strict order, rather than what that order is.

This strict ordering of transactions within blocks makes Hyperledger Fabric a little different to some other blockchains where the same transaction can be packaged into multiple different blocks. In Hyperledger Fabric, this cannot happen – the blocks generated by a collection of orderers are said to be final because once a transaction has been written to a block, its position in the ledger is immutably assured. Hyperledger Fabric’s finality means that a disastrous occurrence known as a ledger fork cannot occur. Once transactions are captured in a block, history cannot be rewritten for that transaction at a future point in time.

We can also see that whereas peers host the ledger and chaincodes, orderers most definitely do not. Every transaction that arrives at an orderer is mechanically packaged in a block – the orderer makes no judgement as to the value of a transaction, it simply packages it. That’s an important behavior of Hyperledger Fabric – all transactions are marshalled into a strict order – transactions are never dropped or de-prioritized.

At the end of phase 2, we see that orderers have been responsible for the simple but vital processes of collecting proposed transaction updates, ordering them, packaging them into blocks, ready for distribution.

Phase 3: Validation

The final phase of the transaction workflow involves the distribution and subsequent validation of blocks from the orderer to the peers, where they can be applied to the ledger. Specifically, at each peer, every transaction within a block is validated to ensure that it has been consistently endorsed by all relevant organizations before it is applied to the ledger. Failed transactions are retained for audit, but are not applied to the ledger.

Peer12

The second role of an orderer node is to distribute blocks to peers. In this example, orderer O1 distributes block B2 to peer P1 and peer P2. Peer P1 processes block B2, resulting in a new block being added to ledger L1 on P1. In parallel, peer P2 processes block B2, resulting in a new block being added to ledger L1 on P2. Once this process is complete, the ledger L1 has been consistently updated on peers P1 and P2, and each may inform connected applications that the transaction has been processed.

Phase 3 begins with the orderer distributing blocks to all peers connected to it. Peers are connected to orderers on channels such that when a new block is generated, all of the peers connected to the orderer will be sent a copy of the new block. Each peer will process this block independently, but in exactly the same way as every other peer on the channel. In this way, we’ll see that the ledger can be kept consistent. It’s also worth noting that not every peer needs to be connected to an orderer – peers can cascade blocks to other peers using the gossip protocol, and those peers can also process them independently. But let’s leave that discussion to another time!

Upon receipt of a block, a peer will process each transaction in the sequence in which it appears in the block. For every transaction, each peer will verify that the transaction has been endorsed by the required organizations according to the endorsement policy of the chaincode which generated the transaction. For example, some transactions may only need to be endorsed by a single organization, whereas others may require multiple endorsements before they are considered valid. This process of validation verifies that all relevant organizations have generated the same outcome or result.
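
For a concrete sense of what a policy can express, here is a sketch using the endorsement policy syntax of the peer CLI (the same -P flag used in the Building Your First Network tutorial later in this document); the MSP names are illustrative:

# an endorsement from a peer in either organization suffices
-P "OR ('Org1MSP.peer','Org2MSP.peer')"

# peers from both organizations must endorse
-P "AND ('Org1MSP.peer','Org2MSP.peer')"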

If a transaction has been endorsed correctly, the peer will attempt to apply it to the ledger. To do this, a peer must perform a ledger consistency check to verify that the current state of the ledger is compatible with the state of the ledger when the proposed update was generated. This may not always be possible, even when the transaction has been fully endorsed. For example, another transaction may have updated the same asset in the ledger such that the transaction update is no longer valid and therefore can no longer be applied. In this way each peer’s copy of the ledger is kept consistent across the network because they each follow the same rules for validation.

After a peer has successfully validated each individual transaction, it updates the ledger. Failed transactions are not applied to the ledger, but they are retained for audit purposes, as are successful transactions. This means that peer blocks are almost exactly the same as the blocks received from the orderer, except for a valid or invalid indicator on each transaction in the block.

We also note that phase 3 does not require the running of chaincodes – this is only done in phase 1, and that’s important. It means that chaincodes only have to be available on endorsing nodes, rather than throughout the blockchain network. This is often helpful as it keeps the logic of the chaincode confidential to endorsing organizations. This is in contrast to the output of the chaincodes (the transaction proposal responses) which are shared with every peer in the channel, whether or not they endorsed the transaction. This specialization of endorsing peers is designed to help scalability.

Finally, every time a block is committed to a peer’s ledger, that peer generates an appropriate event. Block events include the full block content, while block transaction events include summary information only, such as whether each transaction in the block has been validated or invalidated. Chaincode events that the chaincode execution has produced can also be published at this time. Applications can register for these event types so that they can be notified when they occur. These notifications conclude the third and final phase of the transaction workflow.

In summary, phase 3 sees the blocks which are generated by the orderer consistently applied to the ledger. The strict ordering of transactions into blocks allows each peer to validate that transaction updates are consistently applied across the blockchain network.

Orderers and Consensus

This entire transaction workflow process is called consensus because all peers have reached agreement on the order and content of transactions, in a process that is mediated by orderers. Consensus is a multi-step process and applications are only notified of ledger updates when the process is complete – which may happen at slightly different times on different peers.

We will discuss orderers in a lot more detail in a future orderer topic, but for now, think of orderers as nodes which collect and distribute proposed ledger updates from applications for peers to validate and include on the ledger.

That’s it! We’ve now finished our tour of peers and the other components that they relate to in Hyperledger Fabric. We’ve seen that peers are in many ways the most fundamental element – they form the network, host chaincodes and the ledger, handle transaction proposals and responses, and keep the ledger up-to-date by consistently applying transaction updates to it.

Ledger

The ledger is the sequenced, tamper-resistant record of all state transitions. State transitions are a result of chaincode invocations (“transactions”) submitted by participating parties. Each transaction results in a set of asset key-value pairs that are committed to the ledger as creates, updates, or deletes.

The ledger is comprised of a blockchain (‘chain’) to store the immutable, sequenced record in blocks, as well as a state database to maintain current state. There is one ledger per channel. Each peer maintains a copy of the ledger for each channel of which they are a member.

Chain

The chain is a transaction log, structured as hash-linked blocks, where each block contains a sequence of N transactions. The block header includes a hash of the block’s transactions, as well as a hash of the prior block’s header. In this way, all transactions on the ledger are sequenced and cryptographically linked together. In other words, it is not possible to tamper with the ledger data, without breaking the hash links. The hash of the latest block represents every transaction that has come before, making it possible to ensure that all peers are in a consistent and trusted state.

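If you want to inspect these blocks yourself, the peer CLI can fetch them from a channel. A minimal sketch, assuming a configured peer CLI environment and a channel named mychannel like the one built later in this document:

# fetch the most recent block of the channel into a local file
peer channel fetch newest newest.block -c mychannel

# fetch a block by number (block 0 is the genesis block)
peer channel fetch 0 genesis.block -c mychannel
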
The chain is stored on the peer file system (either local or attached storage), efficiently supporting the append-only nature of the blockchain workload.

State Database

The ledger’s current state data represents the latest values for all keys ever included in the chain transaction log. Since current state represents all latest key values known to the channel, it is sometimes referred to as World State.

Chaincode invocations execute transactions against the current state data. To make these chaincode interactions extremely efficient, the latest values of all keys are stored in a state database. The state database is simply an indexed view into the chain’s transaction log; it can therefore be regenerated from the chain at any time. The state database will automatically get recovered (or generated if needed) upon peer startup, before transactions are accepted.

State database options include LevelDB and CouchDB. LevelDB is the default state database embedded in the peer process and stores chaincode data as key/value pairs. CouchDB is an optional alternative external state database that provides additional query support when your chaincode data is modeled as JSON, permitting rich queries of the JSON content. See CouchDB as the State Database for more information on CouchDB.

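For illustration, the BYFN tutorial later in this document can be brought up with CouchDB via its -s flag, and a rich query might then look like the sketch below; the marbles chaincode name and its queryMarbles function are assumptions borrowed from the Fabric samples and only work if your chaincode implements such a function:

./byfn.sh -m up -s couchdb

# hypothetical rich query: all assets whose JSON "owner" field is "tom"
peer chaincode query -C mychannel -n marbles -c '{"Args":["queryMarbles","{\"selector\":{\"owner\":\"tom\"}}"]}'
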
Transaction Flow

At a high level, the transaction flow consists of a transaction proposal sent by an application client to specific endorsing peers. The endorsing peers verify the client signature, and execute a chaincode function to simulate the transaction. The output is the chaincode results, a set of key/value versions that were read in the chaincode (read set), and the set of keys/values that were written in chaincode (write set). The proposal response gets sent back to the client along with an endorsement signature.

The client assembles the endorsements into a transaction payload and broadcasts it to an ordering service. The ordering service delivers ordered transactions as blocks to all peers on a channel.

Before committal, peers will validate the transactions. First, they will check the endorsement policy to ensure that the correct allotment of the specified peers have signed the results, and they will authenticate the signatures against the transaction payload.

Secondly, peers will perform a versioning check against the transaction read set, to ensure data integrity and protect against threats such as double-spending. Hyperledger Fabric has concurrency control whereby transactions execute in parallel (by endorsers) to increase throughput, and upon commit (by all peers) each transaction is verified to ensure that no other transaction has modified data it has read. In other words, it ensures that the data that was read during chaincode execution has not changed since execution (endorsement) time, and therefore the execution results are still valid and can be committed to the ledger state database. If the data that was read has been changed by another transaction, then the transaction in the block is marked as invalid and is not applied to the ledger state database. The client application is alerted, and can handle the error or retry as appropriate.

See the Transaction Flow, Read-Write set semantics, and CouchDB as the State Database topics for a deeper dive on transaction structure, concurrency control, and the state DB.

Use Cases

The Hyperledger Requirements WG is documenting a number of blockchain use cases and maintaining an inventory here.

Tutorials

We offer tutorials to get you started with Hyperledger Fabric. The first is oriented to the Hyperledger Fabric application developer, Writing Your First Application. It takes you through the process of writing your first blockchain application for Hyperledger Fabric using the Hyperledger Fabric Node SDK.

The second tutorial is oriented towards the Hyperledger Fabric network operators, Building Your First Network. This one walks you through the process of establishing a blockchain network using Hyperledger Fabric and provides a basic sample application to test it out.

There are also tutorials for updating your channel, Adding an Org to a Channel, and upgrading your network to a later version of Hyperledger Fabric, Upgrading Your Network Components.

Finally, we offer two chaincode tutorials. One oriented to developers, Chaincode for Developers, and the other oriented to operators, Chaincode for Operators.

Note

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Building Your First Network

Note

These instructions have been verified to work against the latest stable Docker images and the pre-compiled setup utilities within the supplied tar file. If you run these commands with images or tools from the current master branch, it is possible that you will see configuration and panic errors.

The build your first network (BYFN) scenario provisions a sample Hyperledger Fabric network consisting of two organizations, each maintaining two peer nodes, and a “solo” ordering service.

Install prerequisites

Before we begin, if you haven’t already done so, you may wish to check that you have all the Prerequisites installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.

You will also need to download and install the Hyperledger Fabric Samples. You will notice that there are a number of samples included in the fabric-samples repository. We will be using the first-network sample. Let’s open that sub-directory now.

cd fabric-samples/first-network

Note

The supplied commands in this documentation MUST be run from your first-network sub-directory of the fabric-samples repository clone. If you elect to run the commands from a different location, the various provided scripts will be unable to find the binaries.

Want to run it now?

We provide a fully annotated script - byfn.sh - that leverages these Docker images to quickly bootstrap a Hyperledger Fabric network comprised of 4 peers representing two different organizations, and an orderer node. It will also launch a container to run a scripted execution that will join peers to a channel, deploy and instantiate chaincode and drive execution of transactions against the deployed chaincode.

Here’s the help text for the byfn.sh script:

./byfn.sh --help
Usage:
byfn.sh up|down|restart|generate [-c <channel name>] [-t <timeout>] [-d <delay>] [-f <docker-compose-file>] [-s <dbtype>]
byfn.sh -h|--help (print this message)
  -m <mode> - one of 'up', 'down', 'restart' or 'generate'
    - 'up' - bring up the network with docker-compose up
    - 'down' - clear the network with docker-compose down
    - 'restart' - restart the network
    - 'generate' - generate required certificates and genesis block
  -c <channel name> - channel name to use (defaults to "mychannel")
  -t <timeout> - CLI timeout duration in seconds (defaults to 10)
  -d <delay> - delay duration in seconds (defaults to 3)
  -f <docker-compose-file> - specify which docker-compose file use (defaults to docker-compose-cli.yaml)
  -s <dbtype> - the database backend to use: goleveldb (default) or couchdb
  -l <language> - the chaincode language: golang (default) or node
  -a - don't ask for confirmation before proceeding

  Typically, one would first generate the required certificates and
  genesis block, then bring up the network. e.g.:

      byfn.sh -m generate -c mychannel
      byfn.sh -m up -c mychannel -s couchdb

If you choose not to supply a channel name, then the script will use a default name of mychannel. The CLI timeout parameter (specified with the -t flag) is an optional value; if you choose not to set it, then the CLI will give up on query requests made after the default setting of 10 seconds.

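For example, combining only the flags documented in the help text above, the following would generate artifacts for a custom channel and then bring the network up with a 60 second CLI timeout:

./byfn.sh -m generate -c mychannel2
./byfn.sh -m up -c mychannel2 -t 60
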
Generate Network Artifacts

Ready to give it a go? Okay then! Execute the following command:

./byfn.sh -m generate

You will see a brief description as to what will occur, along with a yes/no command line prompt. Respond with a y or hit the return key to execute the described action.

Generating certs and genesis block for with channel 'mychannel' and CLI timeout of '10'
Continue? [Y/n] y
proceeding ...
/Users/xxx/dev/fabric-samples/bin/cryptogen

##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
org1.example.com
2017-06-12 21:01:37.334 EDT [bccsp] GetDefault -> WARN 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
...

/Users/xxx/dev/fabric-samples/bin/configtxgen
##########################################################
#########  Generating Orderer Genesis block ##############
##########################################################
2017-06-12 21:01:37.558 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.562 EDT [msp] getMspConfig -> INFO 002 intermediate certs folder not found at [/Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts]. Skipping.: [stat /Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts: no such file or directory]
...
2017-06-12 21:01:37.588 EDT [common/configtx/tool] doOutputBlock -> INFO 00b Generating genesis block
2017-06-12 21:01:37.590 EDT [common/configtx/tool] doOutputBlock -> INFO 00c Writing genesis block

#################################################################
### Generating channel configuration transaction 'channel.tx' ###
#################################################################
2017-06-12 21:01:37.634 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.644 EDT [common/configtx/tool] doOutputChannelCreateTx -> INFO 002 Generating new channel configtx
2017-06-12 21:01:37.645 EDT [common/configtx/tool] doOutputChannelCreateTx -> INFO 003 Writing new channel tx

#################################################################
#######    Generating anchor peer update for Org1MSP   ##########
#################################################################
2017-06-12 21:01:37.674 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.678 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-12 21:01:37.679 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update

#################################################################
#######    Generating anchor peer update for Org2MSP   ##########
#################################################################
2017-06-12 21:01:37.700 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.704 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-12 21:01:37.704 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update

This first step generates all of the certificates and keys for our various network entities, the genesis block used to bootstrap the ordering service, and a collection of configuration transactions required to configure a Channel.

Bring Up the Network

Next, you can bring the network up with one of the following commands:

./byfn.sh -m up

The above command will compile Golang chaincode images and spin up the corresponding containers. Go is the default chaincode language, however there is also support for Node.js chaincode. If you’d like to run through this tutorial with node chaincode, pass the following command instead:

# we use the -l flag to specify the chaincode language
# forgoing the -l flag will default to Golang

./byfn.sh -m up -l node

Note

View the Hyperledger Fabric Shim documentation for more info on the node.js chaincode shim APIs.

Once again, you will be prompted as to whether you wish to continue or abort. Respond with a y or hit the return key:

Starting with channel 'mychannel' and CLI timeout of '10'
Continue? [Y/n]
proceeding ...
Creating network "net_byfn" with the default driver
Creating peer0.org1.example.com
Creating peer1.org1.example.com
Creating peer0.org2.example.com
Creating orderer.example.com
Creating peer1.org2.example.com
Creating cli


 ____    _____      _      ____    _____
/ ___|  |_   _|    / \    |  _ \  |_   _|
\___ \    | |     / _ \   | |_) |   | |
 ___) |   | |    / ___ \  |  _ <    | |
|____/    |_|   /_/   \_\ |_| \_\   |_|

Channel name : mychannel
Creating channel...

The logs will continue from there. This will launch all of the containers, and then drive a complete end-to-end application scenario. Upon successful completion, it should report the following in your terminal window:

Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query on peer1.org2 on channel 'mychannel' is successful =====================

===================== All GOOD, BYFN execution completed =====================


 _____   _   _   ____
| ____| | \ | | |  _ \
|  _|   |  \| | | | | |
| |___  | |\  | | |_| |
|_____| |_| \_| |____/

You can scroll through these logs to see the various transactions. If you don’t get this result, then jump down to the Troubleshooting section and let’s see whether we can help you discover what went wrong.

Bring Down the Network

Finally, let’s bring it all down so we can explore the network setup one step at a time. The following will kill your containers, remove the crypto material and four artifacts, and delete the chaincode images from your Docker Registry:

./byfn.sh -m down

Once again, you will be prompted to continue, respond with a y or hit the return key:

Stopping with channel 'mychannel' and CLI timeout of '10'
Continue? [Y/n] y
proceeding ...
WARNING: The CHANNEL_NAME variable is not set. Defaulting to a blank string.
WARNING: The TIMEOUT variable is not set. Defaulting to a blank string.
Removing network net_byfn
468aaa6201ed
...
Untagged: dev-peer1.org2.example.com-mycc-1.0:latest
Deleted: sha256:ed3230614e64e1c83e510c0c282e982d2b06d148b1c498bbdcc429e2b2531e91
...

If you’d like to learn more about the underlying tooling and bootstrap mechanics, continue reading. In these next sections we’ll walk through the various steps and requirements to build a fully-functional Hyperledger Fabric network.

Note

The manual steps outlined below assume that the CORE_LOGGING_LEVEL in the cli container is set to DEBUG. You can set this by modifying the docker-compose-cli.yaml file in the first-network directory. e.g.

cli:
  container_name: cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    #- CORE_LOGGING_LEVEL=INFO

Crypto Generator

We will use the cryptogen tool to generate the cryptographic material (x509 certs and signing keys) for our various network entities. These certificates are representative of identities, and they allow for sign/verify authentication to take place as our entities communicate and transact.

How does it work?

Cryptogen consumes a file - crypto-config.yaml - that contains the network topology and allows us to generate a set of certificates and keys for both the Organizations and the components that belong to those Organizations. Each Organization is provisioned a unique root certificate (ca-cert) that binds specific components (peers and orderers) to that Org. By assigning each Organization a unique CA certificate, we are mimicking a typical network where a participating Member would use its own Certificate Authority. Transactions and communications within Hyperledger Fabric are signed by an entity’s private key (keystore), and then verified by means of a public key (signcerts).

You will notice a count variable within this file. We use this to specify the number of peers per Organization; in our case there are two peers per Org. We won’t delve into the minutiae of x.509 certificates and public key infrastructure right now. If you’re interested, you can peruse these topics on your own time.

Before running the tool, let’s take a quick look at a snippet from the crypto-config.yaml. Pay specific attention to the “Name”, “Domain” and “Specs” parameters under the OrdererOrgs header:

OrdererOrgs:
# ---------------------------------------------------------
# Orderer
# ---------------------------------------------------------
- Name: Orderer
  Domain: example.com
  CA:
      Country: US
      Province: California
      Locality: San Francisco
  #   OrganizationalUnit: Hyperledger Fabric
  #   StreetAddress: address for org # default nil
  #   PostalCode: postalCode for org # default nil
  # -------------------------------------------------------
  # "Specs" - See PeerOrgs below for complete description
  # -------------------------------------------------------
  Specs:
    - Hostname: orderer
# ---------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------
PeerOrgs:
# ---------------------------------------------------------
# Org1
# ---------------------------------------------------------
- Name: Org1
  Domain: org1.example.com
  EnableNodeOUs: true

The naming convention for a network entity is as follows - “{{.Hostname}}.{{.Domain}}”. So using our ordering node as a reference point, we are left with an ordering node named - orderer.example.com that is tied to an MSP ID of Orderer. This file contains extensive documentation on the definitions and syntax. You can also refer to the Membership Service Providers (MSP) documentation for a deeper dive on MSP.

After we run the cryptogen tool, the generated certificates and keys will be saved to a folder titled crypto-config.

Configuration Transaction Generator

The configtxgen tool is used to create four configuration artifacts:

  • orderer genesis block,
  • channel configuration transaction,
  • and two anchor peer transactions - one for each Peer Org.

Please see configtxgen for a complete description of this tool’s functionality.

The orderer block is the Genesis Block for the ordering service, and the channel configuration transaction file is broadcast to the orderer at Channel creation time. The anchor peer transactions, as the name might suggest, specify each Org’s Anchor Peer on this channel.

How does it work?

Configtxgen consumes a file - configtx.yaml - that contains the definitions for the sample network. There are three members - one Orderer Org (OrdererOrg) and two Peer Orgs (Org1 & Org2) each managing and maintaining two peer nodes. This file also specifies a consortium - SampleConsortium - consisting of our two Peer Orgs. Pay specific attention to the “Profiles” section at the top of this file. You will notice that we have two unique headers. One for the orderer genesis block - TwoOrgsOrdererGenesis - and one for our channel - TwoOrgsChannel.

These headers are important, as we will pass them in as arguments when we create our artifacts.

Note

Notice that our SampleConsortium is defined in the system-level profile and then referenced by our channel-level profile. Channels exist within the purview of a consortium, and all consortia must be defined in the scope of the network at large.

This file also contains two additional specifications that are worth noting. Firstly, we specify the anchor peers for each Peer Org (peer0.org1.example.com & peer0.org2.example.com). Secondly, we point to the location of the MSP directory for each member, in turn allowing us to store the root certificates for each Org in the orderer genesis block. This is a critical concept. Now any network entity communicating with the ordering service can have its digital signature verified.

Run the tools

You can manually generate the certificates/keys and the various configuration artifacts using the configtxgen and cryptogen commands. Alternately, you could try to adapt the byfn.sh script to accomplish your objectives.

Manually generate the artifacts

You can refer to the generateCerts function in the byfn.sh script for the commands necessary to generate the certificates that will be used for your network configuration as defined in the crypto-config.yaml file. However, for the sake of convenience, we will also provide a reference here.

First let’s run the cryptogen tool. Our binary is in the bin directory, so we need to provide the relative path to where the tool resides.

../bin/cryptogen generate --config=./crypto-config.yaml

You should see the following in your terminal:

org1.example.com
org2.example.com

The certs and keys (i.e. the MSP material) will be output into a directory - crypto-config - at the root of the first-network directory.

Next, we need to tell the configtxgen tool where to look for the configtx.yaml file that it needs to ingest. We will tell it to look in our present working directory:

export FABRIC_CFG_PATH=$PWD

Then, we’ll invoke the configtxgen tool to create the orderer genesis block:

../bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block

You should see an output similar to the following in your terminal:

2017-10-26 19:21:56.301 EDT [common/tools/configtxgen] main -> INFO 001 Loading configuration
2017-10-26 19:21:56.309 EDT [common/tools/configtxgen] doOutputBlock -> INFO 002 Generating genesis block
2017-10-26 19:21:56.309 EDT [common/tools/configtxgen] doOutputBlock -> INFO 003 Writing genesis block

Note

The orderer genesis block and the subsequent artifacts we are about to create will be output into the channel-artifacts directory at the root of this project.

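As a sanity check, configtxgen can also decode the block it just produced; a minimal sketch, run from the same first-network directory:

../bin/configtxgen -inspectBlock ./channel-artifacts/genesis.block
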
Create a Channel Configuration Transaction

Next, we need to create the channel transaction artifact. Be sure to replace $CHANNEL_NAME or set CHANNEL_NAME as an environment variable that can be used throughout these instructions:

# The channel.tx artifact contains the definitions for our sample channel

export CHANNEL_NAME=mychannel  && ../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME

You should see an output similar to the following in your terminal:

2017-10-26 19:24:05.324 EDT [common/tools/configtxgen] main -> INFO 001 Loading configuration
2017-10-26 19:24:05.329 EDT [common/tools/configtxgen] doOutputChannelCreateTx -> INFO 002 Generating new channel configtx
2017-10-26 19:24:05.329 EDT [common/tools/configtxgen] doOutputChannelCreateTx -> INFO 003 Writing new channel tx

Next, we will define the anchor peer for Org1 on the channel that we are constructing. Again, be sure to replace $CHANNEL_NAME or set the environment variable for the following commands. The terminal output will mimic that of the channel transaction artifact:

../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP

Now, we will define the anchor peer for Org2 on the same channel:

../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP

Start the network

We will leverage a script to spin up our network. The docker-compose file references the images that we have previously downloaded, and bootstraps the orderer with our previously generated genesis.block.

We want to go through the commands manually in order to expose the syntax and functionality of each call.

First let’s start your network:

docker-compose -f docker-compose-cli.yaml up -d

If you want to see the realtime logs for your network, then do not supply the -d flag. If you let the logs stream, then you will need to open a second terminal to execute the CLI calls.

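If you did start with -d, you can still follow the logs of any single container afterwards with standard Docker commands; for example, for one of the peers created above:

docker logs -f peer0.org1.example.com
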
The CLI container will stick around idle for 1000 seconds. If it’s gone when you need it you can restart it with a simple command:

docker start cli

Environment variables

For the following CLI commands against peer0.org1.example.com to work, we need to preface our commands with the four environment variables given below. These variables for peer0.org1.example.com are baked into the CLI container, therefore we can operate without passing them. HOWEVER, if you want to send calls to other peers or the orderer, then you will need to provide these values accordingly. Inspect the docker-compose-base.yaml for the specific paths:

# Environment variables for PEER0

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID="Org1MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

Create & Join Channel

Recall that we created the channel configuration transaction using the configtxgen tool in the Create a Channel Configuration Transaction - 创建通道配置交易 section, above. You can repeat that process to create additional channel configuration transactions, using the same or different profiles in the configtx.yaml that you pass to the configtxgen tool. Then you can repeat the process defined in this section to establish those other channels in your network.

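For example, to prepare a hypothetical second channel, you could reuse the TwoOrgsChannel profile with a different channel ID (run from the host in the first-network directory, as before; the file and channel names here are illustrative):

../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel2.tx -channelID mychannel2
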
We will enter the CLI container using the docker exec command:

docker exec -it cli bash

If successful you should see the following:

root@0d78bb69300d:/opt/gopath/src/github.com/hyperledger/fabric/peer#

Next, we are going to pass in the generated channel configuration transaction artifact that we created in the Create a Channel Configuration Transaction section (we called it channel.tx) to the orderer as part of the create channel request.

We specify our channel name with the -c flag and our channel configuration transaction with the -f flag. In this case it is channel.tx, however you can mount your own configuration transaction with a different name. Once again we will set the CHANNEL_NAME environment variable within our CLI container so that we don’t have to explicitly pass this argument:

export CHANNEL_NAME=mychannel

# the channel.tx file is mounted in the channel-artifacts directory within your CLI container
# as a result, we pass the full path for the file
# we also pass the path for the orderer ca-cert in order to verify the TLS handshake
# be sure to export or replace the $CHANNEL_NAME variable appropriately

peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Note

Notice the --cafile flag that we pass as part of this command. It is the local path to the orderer’s root cert, allowing us to verify the TLS handshake.

This command returns a genesis block - <channel-ID.block> - which we will use to join the channel. It contains the configuration information specified in channel.tx. If you have not made any modifications to the default channel name, then the command will return you a proto titled mychannel.block.

Note

You will remain in the CLI container for the remainder of these manual commands. You must also remember to preface all commands with the corresponding environment variables when targeting a peer other than peer0.org1.example.com.

Now let’s join peer0.org1.example.com to the channel.

# By default, this joins ``peer0.org1.example.com`` only
# the <channel-ID.block> was returned by the previous command
# if you have not modified the channel name, you will join with mychannel.block
# if you have created a different channel name, then pass in the appropriately named block

peer channel join -b mychannel.block

You can make other peers join the channel as necessary by making appropriate changes in the four environment variables we used in the Environment variables section, above.

Rather than join every peer, we will simply join peer0.org2.example.com so that we can properly update the anchor peer definitions in our channel. Since we are overriding the default environment variables baked into the CLI container, this full command will be the following:

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer0.org2.example.com:7051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt peer channel join -b mychannel.block

Alternatively, you could choose to set these environment variables individually rather than passing in the entire string. Once they’ve been set, you simply need to issue the peer channel join command again and the CLI container will act on behalf of peer0.org2.example.com.

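For example, to act on behalf of peer0.org2.example.com by exporting the variables once (values taken verbatim from the combined command above):

export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

peer channel join -b mychannel.block
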
Update the anchor peers

The following commands are channel updates and they will propagate to the definition of the channel. In essence, we are adding additional configuration information on top of the channel’s genesis block. Note that we are not modifying the genesis block, but simply adding deltas into the chain that will define the anchor peers.

Update the channel definition to define the anchor peer for Org1 as peer0.org1.example.com:

peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Now update the channel definition to define the anchor peer for Org2 as peer0.org2.example.com. Identically to the peer channel join command for the Org2 peer, we will need to preface this call with the appropriate environment variables.

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer0.org2.example.com:7051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/Org2MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Install & Instantiate Chaincode

Note

We will utilize a simple existing chaincode. To learn how to write your own chaincode, see the Chaincode for Developers tutorial.

Applications interact with the blockchain ledger through chaincode. As such we need to install the chaincode on every peer that will execute and endorse our transactions, and then instantiate the chaincode on the channel.

First, install the sample Go or Node.js chaincode onto one of the four peer nodes. These commands place the specified source code flavor onto our peer’s filesystem.

Note

You can only install one version of the source code per chaincode name and version. The source code exists on the peer’s file system in the context of chaincode name and version; it is language agnostic. Similarly the instantiated chaincode container will be reflective of whichever language has been installed on the peer.

Golang

# this installs the Go chaincode

peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/chaincode_example02/go/

Node.js

# this installs the Node.js chaincode
# make note of the -l flag; we use this to specify the language

peer chaincode install -n mycc -v 1.0 -l node -p /opt/gopath/src/github.com/chaincode/chaincode_example02/node/

Next, instantiate the chaincode on the channel. This will initialize the chaincode on the channel, set the endorsement policy for the chaincode, and launch a chaincode container for the targeted peer. Take note of the -P argument. This is our policy where we specify the required level of endorsement for a transaction against this chaincode to be validated.

In the command below you’ll notice that we specify our policy as -P "OR ('Org1MSP.peer','Org2MSP.peer')". This means that we need “endorsement” from a peer belonging to Org1 OR Org2 (i.e. only one endorsement). If we changed the syntax to AND then we would need two endorsements.

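For illustration, an instantiate command requiring endorsements from peers in both organizations would be identical to the Golang command below except for the -P argument:

peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
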
Golang

# be sure to replace the $CHANNEL_NAME environment variable if you have not exported it
# if you did not install your chaincode with a name of mycc, then modify that argument as well

peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"

Node.js

Note

The instantiation of the Node.js chaincode will take roughly a minute. The command is not hanging; rather it is installing the fabric-shim layer as the image is being compiled.

# be sure to replace the $CHANNEL_NAME environment variable if you have not exported it
# if you did not install your chaincode with a name of mycc, then modify that argument as well
# notice that we must pass the -l flag after the chaincode name to identify the language

peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -l node -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"

See the endorsement policies documentation for more details on policy implementation.

If you want additional peers to interact with ledger, then you will need to join them to the channel, and install the same name, version and language of the chaincode source onto the appropriate peer’s filesystem. A chaincode container will be launched for each peer as soon as they try to interact with that specific chaincode. Again, be cognizant of the fact that the Node.js images will be slower to compile.

Once the chaincode has been instantiated on the channel, we can forgo the -l flag. We need only pass in the channel identifier and name of the chaincode.

Query

Let’s query for the value of a to make sure the chaincode was properly instantiated and the state DB was populated. The syntax for query is as follows:

# be sure to set the -C and -n flags appropriately

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

Invoke

Now let’s move 10 from a to b. This transaction will cut a new block and update the state DB. The syntax for invoke is as follows:

# be sure to set the -C and -n flags appropriately

peer chaincode invoke -o orderer.example.com:7050  --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem  -C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}'

Query

Let’s confirm that our previous invocation executed properly. We initialized the key a with a value of 100 and just removed 10 with our previous invocation. Therefore, a query against a should reveal 90. The syntax for query is as follows.

# be sure to set the -C and -n flags appropriately

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see the following:

Query Result: 90

Feel free to start over and manipulate the key value pairs and subsequent invocations.

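For example, a second transfer followed by a query should reflect the new balance (continuing from the state above, 90 - 20 = 70):

# move another 20 from a to b
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","20"]}'

# a should now be 70
peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
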
What’s happening behind the scenes?

Note

These steps describe the scenario in which script.sh is run by ‘./byfn.sh up’. Clean your network with ./byfn.sh down and ensure this command is active. Then use the same docker-compose prompt to launch your network again.

  • A script - script.sh - is baked inside the CLI container. The script drives the createChannel command against the supplied channel name and uses the channel.tx file for channel configuration.
  • 脚本 - script.sh - 包含在 CLI 容器中。该脚本使用提供的通道名调用 createChannel 命令,并使用 channel.tx 文件作为通道配置文件。
  • The output of createChannel is a genesis block - <your_channel_name>.block - which gets stored on the peers’ file systems and contains the channel configuration specified from channel.tx.
  • createChannel 的输出是一个初始区块 <your_channel_name>.block,该文件保存在对等节点的文件系统下,其中包含了 channel.tx 中指定的通道配置信息。
  • The joinChannel command is exercised for all four peers, which takes as input the previously generated genesis block. This command instructs the peers to join <your_channel_name> and create a chain starting with <your_channel_name>.block.
  • joinChannel 命令以上一步生成的初始区块作为输入,在四个对等节点上都进行了执行。该命令将对等节点加入 <your_channel_name> 通道,同时创建了一个基于 <your_channel_name>.block 的链。
  • Now we have a channel consisting of four peers, and two organizations. This is our TwoOrgsChannel profile.
  • 现在我们拥有一个包含了 2 个结构、4 个对等节点的通道。这就是我们的 TwoOrgsChannel 配置。
  • peer0.org1.example.com and peer1.org1.example.com belong to Org1; peer0.org2.example.com and peer1.org2.example.com belong to Org2
  • peer0.org1.example.compeer1.org1.example.com 属于机构 Org1; peer0.org2.example.compeer1.org2.example.com 属于机构 Org2
  • These relationships are defined through the crypto-config.yaml and the MSP path is specified in our docker compose.
  • 上述关系是在 crypto-config.yaml 中进行定义,MSP 路径则是在我们的 docker-compose 配置文件中指定。
  • The anchor peers for Org1MSP (peer0.org1.example.com) and Org2MSP (peer0.org2.example.com) are then updated. We do this by passing the Org1MSPanchors.tx and Org2MSPanchors.tx artifacts to the ordering service along with the name of our channel.
  • 随后更新了 Org1MSP 的锚节点 (peer0.org1.example.com) 以及 Org2MSP 的锚节点 (peer0.org2.example.com) 。我们通过将 Org1MSPanchors.txOrg2MSPanchors.tx 文件提交给排序服务(其中需要指定我们的通道名称)实现了上述更新。
  • A chaincode - chaincode_example02 - is installed on peer0.org1.example.com and peer0.org2.example.com
  • 一个链码 - chaincode_example02 - 被安装在 peer0.org1.example.compeer0.org2.example.com
  • The chaincode is then “instantiated” on peer0.org2.example.com. Instantiation adds the chaincode to the channel, starts the container for the target peer, and initializes the key value pairs associated with the chaincode. The initial values for this example are [“a”,”100” “b”,”200”]. This “instantiation” results in a container by the name of dev-peer0.org2.example.com-mycc-1.0 starting.
  • 该链码随后在 peer0.org2.example.com 上进行了 “实例化”。实例化的过程中,将该链码添加到通道中、在目标对等节点上启动容器以及初始化了链码中的 key-value 对。本例的初始化值是 [“a”,”100” “b”,”200”]。 “初始化” 过程完成后,一个名为 dev-peer0.org2.example.com-mycc-1.0 的容器被启动。
  • The instantiation also passes in an argument for the endorsement policy. The policy is defined as -P "OR    ('Org1MSP.peer','Org2MSP.peer')", meaning that any transaction must be endorsed by a peer tied to Org1 or Org2.
  • 实例化过程同时还传入一个参数,作为背书策略。本例中的背书策略是 -P "OR    ('Org1MSP.peer','Org2MSP.peer')",表示每一次交易都必须由机构 Org1 或者机构 Org2 的对等节点进行背书。
  • A query against the value of “a” is issued to peer0.org1.example.com. The chaincode was previously installed on peer0.org1.example.com, so this will start a container for Org1 peer0 by the name of dev-peer0.org1.example.com-mycc-1.0. The result of the query is also returned. No write operations have occurred, so a query against “a” will still return a value of “100”.
  • peer0.org1.example.com 发起了一个查找 “a” 对应值的查询。链码已经在 peer0.org1.example.com 上安装完毕,此时会启动一个名为 dev-peer0.org1.example.com-mycc-1.0 的容器(针对机构 Org1 以及对等节点 peer0)。查询的结果随后被返回。此时还没有写入操作发生,所以查询 “a” 值的返回结果为 “100”。
  • An invoke is sent to peer0.org1.example.com to move “10” from “a” to “b” (see the example command after this list)
  • 向 peer0.org1.example.com 发送了一个调用操作,将从 “a” 转移 “10” 给 “b”(调用命令可参考本列表之后的示例)。
  • The chaincode is then installed on peer1.org2.example.com
  • 随后,链码被安装在 peer1.org2.example.com
  • A query is sent to peer1.org2.example.com for the value of “a”. This starts a third chaincode container by the name of dev-peer1.org2.example.com-mycc-1.0. A value of 90 is returned, correctly reflecting the previous transaction during which the value for key “a” was modified by 10.
  • peer1.org2.example.com 发起了一个查找 “a” 对应值的查询。此时会启动第 3 个容器,名为 dev-peer1.org2.example.com-mycc-1.0。返回值为 90,正确的反映了之前的交易结果,其中 “a” 对应的值被转移了 “10”。
What does this demonstrate? - 演示了哪些内容?

Chaincode MUST be installed on a peer in order for it to successfully perform read/write operations against the ledger. Furthermore, a chaincode container is not started for a peer until an init or traditional transaction - read/write - is performed against that chaincode (e.g. query for the value of “a”). The transaction causes the container to start. Also, all peers in a channel maintain an exact copy of the ledger which comprises the blockchain to store the immutable, sequenced record in blocks, as well as a state database to maintain a snapshot of the current state. This includes those peers that do not have chaincode installed on them (like peer1.org1.example.com in the above example) . Finally, the chaincode is accessible after it is installed (like peer1.org2.example.com in the above example) because it has already been instantiated.

链码 必须 被安装在对等节点上后,才具备了成功读写账本的能力。进一步的,链码容器只有在 初始化 (init) 或者传统读写交易(例如查询 “a” 对应的值)发生时才会启动。 是由交易启动了容器。同时,通道内的所有对等节点各自都维护了账本的一份准确的拷贝,该账本中包含了区块链(用于存储不可变且有序的区块),还包含了状态数据库(用于维护当前状态快照)。并不是每个对等节点都需要安装链码(例如上述例子中的 peer1.org1.example.com)。最后,链码如果已经被实例化过一次, 则在新的对等节点上被安装后即可直接被访问(例如上述例子中的 peer1.org2.example.com)。
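
If you want to verify this behavior yourself, you can list the chaincode containers that have been started (a quick check with Docker; the dev-peer name prefix matches the containers mentioned above):

如果你想自行验证这一行为,可以列出已经启动的链码容器(借助 Docker 的一个快速检查;dev-peer 名称前缀与上文提到的容器一致):

docker ps --filter "name=dev-peer"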

How do I see these transactions? - 如何查看交易的具体信息?

Check the logs for the CLI Docker container.

查看 CLI Docker 容器的日志。

docker logs -f cli

You should see the following output:

你会看到如下输出:

2017-05-16 17:08:01.366 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-05-16 17:08:01.366 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-05-16 17:08:01.366 UTC [msp/identity] Sign -> DEBU 006 Sign: plaintext: 0AB1070A6708031A0C08F1E3ECC80510...6D7963631A0A0A0571756572790A0161
2017-05-16 17:08:01.367 UTC [msp/identity] Sign -> DEBU 007 Sign: digest: E61DB37F4E8B0D32C9FE10E3936BA9B8CD278FAA1F3320B08712164248285C54
Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query on peer1.org2 on channel 'mychannel' is successful =====================

===================== All GOOD, BYFN execution completed =====================


 _____   _   _   ____
| ____| | \ | | |  _ \
|  _|   |  \| | | | | |
| |___  | |\  | | |_| |
|_____| |_| \_| |____/

You can scroll through these logs to see the various transactions.

你可以滚动屏幕,看到各个交易的具体信息。

How can I see the chaincode logs? - 如何查看链码的日志?

Inspect the individual chaincode containers to see the separate transactions executed against each container. Here is the combined output from each container:

通过查看各个链码容器,可以看到该容器相关的交易信息。如下是各个容器的日志输出:

$ docker logs dev-peer0.org2.example.com-mycc-1.0
04:30:45.947 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Init
Aval = 100, Bval = 200

$ docker logs dev-peer0.org1.example.com-mycc-1.0
04:31:10.569 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Invoke
Query Response:{"Name":"a","Amount":"100"}
ex02 Invoke
Aval = 90, Bval = 210

$ docker logs dev-peer1.org2.example.com-mycc-1.0
04:31:30.420 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Invoke
Query Response:{"Name":"a","Amount":"90"}

Understanding the Docker Compose topology - 理解 Docker Compose 的拓扑结构

The BYFN sample offers us two flavors of Docker Compose files, both of which are extended from the docker-compose-base.yaml (located in the base folder). Our first flavor, docker-compose-cli.yaml, provides us with a CLI container, along with an orderer and four peers. We use this file for the entirety of the instructions on this page.

本示例(BYFN)提供了两种方案的 Docker Compose 配置文件,两个方案都是基于 docker-compose-base.yaml (位于 base 目录下)扩展而来。第一种方案的配置文件为 docker-compose-cli.yaml,提供了 1 个 CLI 容器、1 个排序服务节点以及 4 个对等节点。在本篇介绍中,我们使用的都是此方案。
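
If you bring the network up manually rather than via byfn.sh, the command for this first flavor would look like the following (a sketch; run from the first-network directory, assuming the default compose file names and that the crypto material and channel artifacts have already been generated):

如果你不通过 byfn.sh 而是手动启动网络,第一种方案对应的命令大致如下(一个示意;在 first-network 目录下运行,假设使用默认的 compose 文件名,并且加密材料和通道配置工件已经生成):

docker-compose -f docker-compose-cli.yaml up -d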

注解

the remainder of this section covers a docker-compose file designed for the SDK. Refer to the Node SDK repo for details on running these tests.

本节的剩余内容涵盖了一份用于 SDK 的 docker-compose 配置文件。如果想运行这些测试案例,请参考 Node SDK

The second flavor, docker-compose-e2e.yaml, is constructed to run end-to-end tests using the Node.js SDK. Aside from functioning with the SDK, its primary differentiation is that there are containers for the fabric-ca servers. As a result, we are able to send REST calls to the organizational CAs for user registration and enrollment.

第二种方案的配置文件为 docker-compose-e2e.yaml,该方案构建了一个端到端的测试场景,用于运行 Node.js SDK。除了和 SDK 的交互功能外,该方案最大的区别是包含了作为 fabric-ca 服务器的容器。因此,我们可以通过发送 REST 请求给 CA,实现用户的注册和登记。

If you want to use the docker-compose-e2e.yaml without first running the byfn.sh script, then we will need to make four slight modifications. We need to point to the private keys for our Organization’s CA’s. You can locate these values in your crypto-config folder. For example, to locate the private key for Org1 we would follow this path - crypto-config/peerOrganizations/org1.example.com/ca/. The private key is a long hash value followed by _sk. The path for Org2 would be - crypto-config/peerOrganizations/org2.example.com/ca/.

如果你希望不通过 byfn.sh 脚本而直接使用 docker-compose-e2e.yaml 的话,需要做 4 处小的修改。我们需要指向各机构 CA 的私钥,这些值可以在 crypto-config 文件夹中找到。例如,要找到 Org1 的私钥,路径为 crypto-config/peerOrganizations/org1.example.com/ca/。私钥文件名是一长串哈希值,并以 _sk 结尾。Org2 对应的路径为 crypto-config/peerOrganizations/org2.example.com/ca/

In the docker-compose-e2e.yaml, update the FABRIC_CA_SERVER_TLS_KEYFILE variable for ca0 and ca1. You also need to edit the path that is provided in the command to start the ca server. You are providing the same private key twice for each CA container.

更新 docker-compose-e2e.yaml 中 ca0 和 ca1 的 FABRIC_CA_SERVER_TLS_KEYFILE 值。你还需要修改启动 ca 服务器的命令中的路径。对每一个 CA 容器,你需要提供两次相同的密钥。
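
To find the exact key file names you need to substitute, you can list the CA folders (a quick sketch; run from the first-network directory after the crypto material has been generated):

要找到需要替换的具体密钥文件名,可以列出 CA 文件夹的内容(一个简单示意;请在生成加密材料后,于 first-network 目录下运行):

ls crypto-config/peerOrganizations/org1.example.com/ca/ | grep _sk
ls crypto-config/peerOrganizations/org2.example.com/ca/ | grep _sk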

Using CouchDB - 使用 CouchDB

The state database can be switched from the default (goleveldb) to CouchDB. The same chaincode functions are available with CouchDB, however, there is the added ability to perform rich and complex queries against the state database data content contingent upon the chaincode data being modeled as JSON.

状态数据库可以从默认的 goleveldb 切换到 CouchDB。CouchDB 提供了相同的链码函数,此外,对于采用 JSON 结构的链码数据,CouchDB 还提供了进行富查询和复杂查询的能力。

To use CouchDB instead of the default database (goleveldb), follow the same procedures outlined earlier for generating the artifacts, except when starting the network pass docker-compose-couch.yaml as well:

要想使用 CouchDB 替换默认的数据库 (goleveldb),按照之前完全相同的步骤生成相关配置工件,只是在启动网络时,如下所示增加 docker-compose-couch.yaml 文件:

docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d

chaincode_example02 should now work using CouchDB underneath.

chaincode_example02 此时就是基于 CouchDB 运行。

注解

If you choose to implement mapping of the fabric-couchdb container port to a host port, please make sure you are aware of the security implications. Mapping of the port in a development environment makes the CouchDB REST API available, and allows the visualization of the database via the CouchDB web interface (Fauxton). Production environments would likely refrain from implementing port mapping in order to restrict outside access to the CouchDB containers.

如果你将 fabric-couchdb 容器的端口映射到主机端口,请确保你明白其中的安全问题。在开发环境中,端口映射后可以直接访问 CouchDB 的 REST API,还可以通过 CouchDB 网页接口 (Fauxton) 查看可视化后的数据。在生产环境中,应该尽量避免端口映射,严格限制外界对 CouchDB 容器的访问。

You can use chaincode_example02 chaincode against the CouchDB state database using the steps outlined above, however in order to exercise the CouchDB query capabilities you will need to use a chaincode that has data modeled as JSON, (e.g. marbles02). You can locate the marbles02 chaincode in the fabric/examples/chaincode/go directory.

基于上述步骤,你可以基于 CouchDB 状态数据库运行 chaincode_example02 链码,但是,如果想利用 CouchDB 的查询能力,你还需要一个采用 JSON 结构保存数据的链码 (例如 marbles02)。你可以在 fabric/examples/chaincode/go 目录下找到 marbles02 链码的源文件。

We will follow the same process to create and join the channel as outlined in the Create & Join Channel - 创建和加入通道 section above. Once you have joined your peer(s) to the channel, use the following steps to interact with the marbles02 chaincode:

我们将会采用和 Create & Join Channel - 创建和加入通道 一节相同的步骤去创建和加入通道。在将你的对等节点(们)加入到通道后,采用如下步骤去和 marbles02 链码进行交互:

  • Install and instantiate the chaincode on peer0.org1.example.com:
  • peer0.org1.example.com 上安装和实例化链码:
# be sure to modify the $CHANNEL_NAME variable accordingly for the instantiate command

# 请确保为实例化命令相应地设置 $CHANNEL_NAME 变量

peer chaincode install -n marbles -v 1.0 -p github.com/chaincode/marbles02/go
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -v 1.0 -c '{"Args":["init"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
  • Create some marbles and move them around:
  • 创建一些弹珠 (marble) 并移动它们:
# be sure to modify the $CHANNEL_NAME variable accordingly

# 请确保将 $CHANNEL_NAME 修改为合适的值

peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble1","blue","35","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble2","red","50","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble3","blue","70","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["transferMarble","marble2","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["transferMarblesBasedOnColor","blue","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["delete","marble1"]}'
  • If you chose to map the CouchDB ports in docker-compose, you can now view the state database through the CouchDB web interface (Fauxton) by opening a browser and navigating to the following URL:

  • 如果你在 docker-compose 中对 CouchDB 进行了端口映射,你可以使用 CouchDB 网页接口 (Fauxton) 去查看状态数据库的内容,需要打开浏览器并输入如下网址:

    http://localhost:5984/_utils

You should see a database named mychannel (or your unique channel name) and the documents inside it.

你会看到一个名为 mychannel (或你所指定的通道名称)的数据库及其内部的文档。
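
You can also confirm this from the command line with the standard CouchDB REST API (assuming you mapped port 5984 to the host as discussed above):

你也可以通过标准的 CouchDB REST API 在命令行中确认这一点(假设你按照上文所述将 5984 端口映射到了主机):

curl http://localhost:5984/_all_dbs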

注解

For the below commands, be sure to update the $CHANNEL_NAME variable appropriately.

对于如下命令,请确保将 $CHANNEL_NAME 修改为合适的值

You can run regular queries from the CLI (e.g. reading marble2):

你可以从 CLI 发起常规查询(例如读取 marble2):

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["readMarble","marble2"]}'

The output should display the details of marble2:

输出为 marble2 的详细信息:

Query Result: {"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}

You can retrieve the history of a specific marble - e.g. marble1:

你可以获取指定弹珠的历史信息 - 例如 marble1

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["getHistoryForMarble","marble1"]}'

The output should display the transactions on marble1:

输出为 marble1 相关的所有交易:

Query Result: [{"TxId":"1c3d3caf124c89f91a4c0f353723ac736c58155325f02890adebaa15e16e6464", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"tom"}},{"TxId":"755d55c281889eaeebf405586f9e25d71d36eb3d35420af833a20a2f53a3eefd", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"jerry"}},{"TxId":"819451032d813dde6247f85e56a89262555e04f14788ee33e28b232eef36d98f", "Value":}]

You can also perform rich queries on the data content, such as querying marble fields by owner jerry:

你还可以基于数据内容发起一个富查询,例如查询所有者 (owner) 为 jerry 的弹珠:

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesByOwner","jerry"]}'

The output should display the two marbles owned by jerry:

输出为 jerry 拥有的两个弹珠信息:

Query Result: [{"Key":"marble2", "Record":{"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}},{"Key":"marble3", "Record":{"color":"blue","docType":"marble","name":"marble3","owner":"jerry","size":70}}]

Why CouchDB - 为何使用 CouchDB

CouchDB is a kind of NoSQL solution. It is a document-oriented database where document fields are stored as key-value maps. Fields can be either a simple key/value pair, list, or map. In addition to the keyed/composite-key/key-range queries which are supported by LevelDB, CouchDB also supports full data rich query capability, such as non-key queries against the whole blockchain data, since its data content is stored in JSON format and fully queryable. Therefore, CouchDB can meet chaincode, auditing, and reporting requirements for many use cases that are not supported by LevelDB.

CouchDB 是 NoSQL 的一种解决方案。它是一种面向文档的数据库,其中文档字段以键-值 (key-value) 的形式保存。字段可以是简单的键值对 (key/value pair)、列表 (list) 或者图 (map)。CouchDB 不仅支持 LevelDB 所支持的 keyed/composite-key/key-range 等查询方式,还提供了全数据富查询的能力,例如对于整个区块链数据的无键查询 (non-key queries),这是因为 CouchDB 的数据内容使用了 JSON 格式进行存储,提供了全方位的查询能力。因此,在一些 LevelDB 无法支持的应用场景下,CouchDB 还可以满足链码、审计和报告等需求。

CouchDB can also enhance the security for compliance and data protection in the blockchain. It is able to implement field-level security through the filtering and masking of individual attributes within a transaction, and only authorize read-only permission if needed.

CouchDB 还可以增强区块链中合规和数据保护方面的安全性。它能够通过对交易中的单个属性进行过滤和屏蔽来实现字段级别的安全性,并在需要时只授予只读权限。

In addition, CouchDB falls into the AP-type (Availability and Partition Tolerance) of the CAP theorem. It uses a master-master replication model with Eventual Consistency. More information can be found on the Eventual Consistency page of the CouchDB documentation. However, under each fabric peer there are no database replicas; writes to the database are guaranteed consistent and durable (not Eventual Consistency).

此外,CouchDB 属于 CAP 理论中的 AP 类型 (可用性和分区容错性,Availability and Partition Tolerance)。它使用带有 最终一致性 (Eventual Consistency) 的主 - 主复制模型 (master-master replication model)。更详细的信息请参考 Eventual Consistency page of the CouchDB documentation 。然而,在每个 fabric 对等节点内并不存在数据库副本,对数据库的写入保证是一致且持久的(而非 最终一致性 )。

CouchDB is the first external pluggable state database for Fabric, and there could and should be other external database options. For example, IBM enables the relational database for its blockchain. And CP-type (Consistency and Partition Tolerance) databases may also be needed, so as to enable data consistency without an application-level guarantee.

CouchDB 是 Fabric 的第一个可插拔的外部状态数据库,可以有也应该有其他的外部数据库选项。例如,IBM 在它的区块链中采用了关系型数据库。此外,CP 类型 (一致性和分区容错性,Consistency and Partition Tolerance) 的数据库也是需要的,这样可以在不需要应用层担保的情况下实现数据的一致性。

A Note on Data Persistence - 数据持久化的注意事项

If data persistence is desired on the peer container or the CouchDB container, one option is to mount a directory in the docker-host into a relevant directory in the container. For example, you may add the following two lines in the peer container specification in the docker-compose-base.yaml file:

如果对等节点的容器或者 CouchDB 容器需要进行数据持久化,一种方法是将主机的目录挂载到容器内的相关目录。例如你可以将如下两行添加到 docker-compose-base.yaml 文件中对等节点容器的配置中:

volumes:
 - /var/hyperledger/peer0:/var/hyperledger/production

For the CouchDB container, you may add the following two lines in the CouchDB container specification:

对于 CouchDB 容器,你可以添加如下两行到 CouchDB 容器的配置中:

volumes:
 - /var/hyperledger/couchdb0:/opt/couchdb/data
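
With these mounts in place, you can verify on the docker host that ledger and state data survive container restarts (a quick check, assuming the host paths shown above):

配置好这些挂载后,你可以在 docker 主机上验证账本和状态数据在容器重启后依然存在(一个快速检查,假设使用上文所示的主机路径):

ls /var/hyperledger/peer0
ls /var/hyperledger/couchdb0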

Troubleshooting - 疑难解答

  • Always start your network fresh. Use the following command to remove artifacts, crypto, containers and chaincode images:

  • 总是在全新的环境下启动你的网络。 使用如下命令来删除工件、加密文件、容器以及链码镜像:

    ./byfn.sh -m down
    

    注解

    You will see errors if you do not remove old containers and images.

    如果你没有删除旧的容器和镜像,你 将会 看到错误。

  • If you see Docker errors, first check your docker version (预备知识), and then try restarting your Docker process. Problems with Docker are oftentimes not immediately recognizable. For example, you may see errors resulting from an inability to access crypto material mounted within a container.

  • 如果你看到 Docker 错误,首先检查你的 docker 版本 (预备知识),随后尝试重启你的 Docker 进程。Docker 相关的问题有时并不能很直观的被发现。例如,如果容器内没有权限读取挂载的加密文件,你会看到错误。

    If they persist remove your images and start from scratch:

    如果这些错误始终存在,删除你的镜像,随后从头开始:

    docker rm -f $(docker ps -aq)
    docker rmi -f $(docker images -q)
    
  • If you see errors on your create, instantiate, invoke or query commands, make sure you have properly updated the channel name and chaincode name. There are placeholder values in the supplied sample commands.

  • 如果在执行创建、实例化、调用或查询链码命令时遇到错误,请确保你已经正确更新了通道名称和链码名称。在提供的示例命令中包含一些占位的变量。

  • If you see the below error:

  • 如果你看到如下错误

    Error: Error endorsing chaincode: rpc error: code = 2 desc = Error installing chaincode code mycc:1.0(chaincode /var/hyperledger/production/chaincodes/mycc.1.0 exits)
    

    You likely have chaincode images (e.g. dev-peer1.org2.example.com-mycc-1.0 or dev-peer0.org1.example.com-mycc-1.0) from prior runs. Remove them and try again.

    很有可能是还有之前运行网络时生成的链码镜像(例如 dev-peer1.org2.example.com-mycc-1.0dev-peer0.org1.example.com-mycc-1.0 )。把它们删除后重试。

    docker rmi -f $(docker images | grep dev-peer | awk '{print $3}')
    
  • If you see something similar to the following:

  • 如果你看到类似如下的内容:

    Error connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
    Error: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
    

    Make sure you are running your network against the “1.1.0” images that have been retagged as “latest”.

    请确保你是基于 “1.1.0” 版本的镜像运行你的网络,并且这些镜像都被标记为 “latest”。

  • If you see the below error:

  • 如果你看到如下错误:

    [configtx/tool/localconfig] Load -> CRIT 002 Error reading configuration: Unsupported Config Type ""
    panic: Error reading configuration: Unsupported Config Type ""
    

    Then you did not set the FABRIC_CFG_PATH environment variable properly. The configtxgen tool needs this variable in order to locate the configtx.yaml. Go back and execute an export FABRIC_CFG_PATH=$PWD, then recreate your channel artifacts.

    这是由于你没有设置环境变量 FABRIC_CFG_PATH 导致的。configtxgen 工具需要依赖这个变量来访问 configtx.yaml。执行 export FABRIC_CFG_PATH=$PWD ,然后重新创建你的通道配置工件。

  • To cleanup the network, use the down option:

  • 使用 down 选项清理网络:

    ./byfn.sh -m down
    
  • If you see an error stating that you still have “active endpoints”, then prune your Docker networks. This will wipe your previous networks and start you with a fresh environment:

  • 如果你看到错误提示是还有 “active endpoints”,请删除你的 Docker 网络。如下命令会清空之前的网络,使你可以从一个全新环境开始:

    docker network prune
    

    You will see the following message:

    你会看到如下信息:

    WARNING! This will remove all networks not used by at least one container.
    Are you sure you want to continue? [y/N]
    

    Select y.

    选择 y

注解

If you continue to see errors, share your logs on the fabric-questions channel on Hyperledger Rocket Chat or on StackOverflow.

如果你还是遇到错误,请将你的日志分享到 Hyperledger Rocket Chatfabric-questions 频道下,或者 StackOverflow

Writing Your First Application - 编写你的第一个应用

注解

If you’re not yet familiar with the fundamental architecture of a Fabric network, you may want to visit the Introduction and Building Your First Network - 构建你的第一个网络 documentation prior to continuing.

如果你对 Fabric 网络的基本架构还不了解,请在开始阅读本文之前,先阅读 IntroductionBuilding Your First Network - 构建你的第一个网络 文档。

In this section we’ll be looking at a handful of sample programs to see how Fabric apps work. These apps (and the smart contract they use) – collectively known as fabcar – provide a broad demonstration of Fabric functionality. Notably, we will show the process for interacting with a Certificate Authority and generating enrollment certificates, after which we will leverage these generated identities (user objects) to query and update a ledger.

在本节中,我们会通过一些示例程序了解 Fabric 应用是如何工作的。 这些应用(以及它们使用的智能合约)统称为 fabcar,提供了一个对 Fabric 功能的全方位演示。 特别地,我们会展示与证书颁发机构交互并生成登记证书的过程,随后我们会利用这些生成的身份(用户对象)来查询和更新账本。

We’ll go through three principle steps:

我们会展示如下三个主要过程:

1. Setting up a development environment. Our application needs a network to interact with, so we’ll download one stripped down to just the components we need for registration/enrollment, queries and updates:

1. 构建一个开发环境。 我们的应用需要和一个网络进行交互,所以我们需要下载一个经过裁剪的网络,以刚好满足我们注册、登记、查询和更新的要求:

_images/AppConceptsOverview.png

2. Learning the parameters of the sample smart contract our app will use. Our smart contract contains various functions that allow us to interact with the ledger in different ways. We’ll go in and inspect that smart contract to learn about the functions our applications will be using.

2. 了解我们应用所使用的示例智能合约的参数。 我们的智能合约包含多个函数,使得我们可以和账本进行多种交互。 我们会仔细阅读智能合约,深入了解我们应用所使用到的函数。

3. Developing the applications to be able to query and update assets on the ledger. We’ll get into the app code itself (our apps have been written in Javascript) and manually manipulate the variables to run different kinds of queries and updates.

3. 开发一个可以查询和更新账本的应用。 我们会阅读应用代码本身(我们的应用是基于 Javascript 编写的),手动的修改变量,实现不同的查询和更新操作。

After completing this tutorial you should have a basic understanding of how an application is programmed in conjunction with a smart contract to interact with the ledger (i.e. the peer) on a Fabric network.

完成本教程后,你会对以下过程有一个基本了解:应用程序是如何与智能合约配合编写,从而与 Fabric 网络上的账本(即对等节点)进行交互的。

Setting up your Dev Environment - 构建一个开发环境

First thing, let’s download the Fabric images and the accompanying artifacts for the network and applications...

首先,需要下载 Fabric 镜像以及网络和应用的相关文件...

Visit the 预备知识 page and ensure you have the necessary dependencies installed on your machine.

访问 预备知识 页面,确保你已经安装了所有必要的依赖。

Next, visit the Hyperledger Fabric 示例 page and follow the provided instructions. Return to this tutorial once you have cloned the fabric-samples repository, and downloaded the latest stable Fabric images and available utilities.

下一步,访问 Hyperledger Fabric 示例 页面并按照所提供的说明进行操作。 当你克隆了 fabric-samples 仓库,并下载了最新稳定版的 Fabric 镜像及相关工具之后,再回到本教程。

At this point everything should be installed. Navigate to the fabcar subdirectory within your fabric-samples repository and take a look at what’s inside:

至此,所有需要的依赖应该已经都安装好。 进入 fabric-samples 仓库的 fabcar 子目录,查看里面有哪些文件:

cd fabric-samples/fabcar  && ls

You should see the following:

你会看到如下输出:

enrollAdmin.js     invoke.js       package.json    query.js        registerUser.js startFabric.sh

Before starting we also need to do a little housekeeping. Run the following command to kill any stale or active containers:

在开始之前,我们需要进行一些清理工作。 运行下述命令,关闭所有的容器:

docker rm -f $(docker ps -aq)

Clear any cached networks:

清空已缓存的网络:

# Press 'y' when prompted by the command

docker network prune

And lastly if you’ve already run through this tutorial, you’ll also want to delete the underlying chaincode image for the fabcar smart contract. If you’re a user going through this content for the first time, then you won’t have this chaincode image on your system:

最后,如果你之前已经运行过本教程的内容,需要删除 fabcar 智能合约对应的链码镜像。 如果你是第一次阅读和运行本教程的内容,你的机器上不会有这些链码镜像。

docker rmi dev-peer0.org1.example.com-fabcar-1.0-5c906e402ed29f20260ae42283216aa75549c571e2e380f3615826365d8269ba

Install the clients & launch the network - 安装客户端并启动网络

注解

The following instructions require you to be in the fabcar subdirectory within your local clone of the fabric-samples repo. Remain at the root of this subdirectory for the remainder of this tutorial.

下述指令需要你位于本地的 fabric-samples 仓库目录下的 fabcar 子目录下。 本教程随后部分也需要你始终保持在该子目录下。

Run the following command to install the Fabric dependencies for the applications. We are concerned with fabric-ca-client which will allow our app(s) to communicate with the CA server and retrieve identity material, and with fabric-client which allows us to load the identity material and talk to the peers and ordering service.

运行如下命令,安装应用所需要的 Fabric 相关依赖。 我们通过使用 fabric-ca-client 和 CA 服务器进行交互获取身份标识文件,然后使用 fabric-client 加载这些身份标识文件,并与对等节点和排序服务进行交互。

npm install
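
To confirm that both packages were installed, you can list them with npm (the exact versions will depend on your fabric-samples checkout):

要确认这两个包都已安装,可以用 npm 列出它们(具体版本取决于你检出的 fabric-samples 版本):

npm ls fabric-ca-client fabric-client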

Launch your network using the startFabric.sh shell script. This command will spin up our various Fabric entities and launch a smart contract container for chaincode written in Golang:

使用 startFabric.sh 脚本启动你的网络。 这个命令会启动多个 Fabric 实体,并且启动一个基于 Golang 编写的链码的智能合约容器。

./startFabric.sh

You also have the option of running this tutorial against chaincode written in Node.js. If you’d like to pursue this route, issue the following command instead:

你同样可以使用基于 Node.js 编写的智能合约来运行本教程。 如果你想这么做,使用如下的指令:

./startFabric.sh node

注解

Be aware that the Node.js chaincode scenario will take roughly 90 seconds to complete; perhaps longer. The script is not hanging, rather the increased time is a result of the fabric-shim being installed as the chaincode image is being built.

注意完成 Node.js 链码场景大约会消耗 90 秒甚至更长的时间。 脚本并没有被挂起,时间的增加是因为在构建链码镜像的过程中需要安装 fabric-shim。

Alright, now that you’ve got a sample network and some code, let’s take a look at how the different pieces fit together.

好的,现在你已经有了一个示例网络以及一些代码,让我们看看不同部分之间是如何相互适配的。

How Applications Interact with the Network - 应用是如何和网络进行交互的

For a more in-depth look at the components in our fabcar network (and how they’re deployed) as well as how applications interact with those components on more of a granular level, see Understanding the Fabcar Network.

如果想深入了解 fabcar 网络的每一个组件(包括他们是如何部署的),以及应用是如何与这些组件之间进行交互的,请参考 Understanding the Fabcar Network 文档。

Developers more interested in seeing what applications do – as well as looking at the code itself to see how an application is constructed – should continue. For now, the most important thing to know is that applications use a software development kit (SDK) to access the APIs that permit queries and updates to the ledger.

更想了解应用能做什么,以及希望通过阅读代码本身了解应用是如何构建的开发者,请继续阅读。 目前,最需要了解的是:应用程序使用软件开发套件(SDK)来访问那些允许对账本进行查询和更新的 API。

Enrolling the Admin User - 登记管理员用户

注解

The following two sections involve communication with the Certificate Authority. You may find it useful to stream the CA logs when running the upcoming programs.

随后的两个小节涉及与证书颁发机构(CA)的通信。 在运行接下来的程序时,实时查看 CA 的日志会非常有帮助。

To stream your CA logs, split your terminal or open a new shell and issue the following:

为了查看你的 CA 日志,将终端分屏或者打开一个新的终端并输入如下命令:

docker logs -f ca.example.com

Now hop back to your terminal with the fabcar content...

现在,回到你的包含 fabcar 内容的终端...

When we launched our network, an admin user - admin - was registered with our Certificate Authority. Now we need to send an enroll call to the CA server and retrieve the enrollment certificate (eCert) for this user. We won’t delve into enrollment details here, but suffice it to say that the SDK and by extension our applications need this cert in order to form a user object for the admin. We will then use this admin object to subsequently register and enroll a new user. Send the admin enroll call to the CA server:

当我们启动网络时,一个管理员用户 - admin - 已经被注册到我们的证书颁发机构中。 现在我们需要向 CA 服务器发送一个登记请求,并获取该用户的登记证书(eCert)。 这里我们不会深入讨论登记的细节,但可以说,SDK 乃至我们的应用程序都需要这个证书来为 admin 构建一个用户对象。 随后,我们会使用这个 admin 对象来注册和登记一个新用户。 向 CA 服务器发送 admin 登记请求:

node enrollAdmin.js

This program will invoke a certificate signing request (CSR) and ultimately output an eCert and key material into a newly created folder - hfc-key-store - at the root of this project. Our apps will then look to this location when they need to create or load the identity objects for our various users.

这个程序会执行一个证书签名请求(CSR),最后输出 eCert 和密钥文件到一个新创建的目录下 - hfc-key-store - 该目录位于项目的根目录下。 随后,我们的应用在需要创建或者加载用户的身份标识对象时,会查看这个目录。
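
After the program completes, you can inspect that folder (the exact file names are hash-based and will differ per run; you should see an entry for the admin identity plus its key material):

程序运行完成后,你可以查看该目录(具体文件名基于哈希值,每次运行都会不同;你应该能看到 admin 身份对应的条目及其密钥材料):

ls hfc-key-store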

Register and Enroll user1 - 注册和登记 user1

With our newly generated admin eCert, we will now communicate with the CA server once more to register and enroll a new user. This user - user1 - will be the identity we use when querying and updating the ledger. It’s important to note here that it is the admin identity that is issuing the registration and enrollment calls for our new user (i.e. this user is acting in the role of a registrar). Send the register and enroll calls for user1:

使用我们新生成的管理员 eCert,我们可以再次和 CA 服务器交互,注册和登记一个新用户。 这个用户 - user1 - 将是我们查询和更新账本时使用的身份。 值得注意的是,是 admin 这个身份在为新用户发起注册和登记请求(即该用户扮演了登记员的角色)。 发送 user1 的注册和登记请求:

node registerUser.js

Similar to the admin enrollment, this program invokes a CSR and outputs the keys and eCert into the hfc-key-store subdirectory. So now we have identity material for two separate users - admin & user1. Time to interact with the ledger...

和登记管理员时类似,该程序执行了一个 CSR,输出密钥和 eCert 文件到 hfc-key-store 子目录下。 现在我们有了两个用户的身份标识文件 - adminuser1。 现在到了和账本进行交互的时间了...

Querying the Ledger - 查询账本

Queries are how you read data from the ledger. This data is stored as a series of key/value pairs, and you can query for the value of a single key, multiple keys, or – if the ledger is written in a rich data storage format like JSON – perform complex searches against it (looking for all assets that contain certain keywords, for example).

查询操作就是如何从账本读取数据。这些数据以一系列键/值对的形式存储,你可以查询单个键或多个键对应的值;如果账本是以 JSON 这类富数据存储格式写入的,还可以对其进行复杂查询(比如,查询所有包含特定关键词的资产)。

This is a representation of how a query works:

下图展示了查询操作是如何工作的:

_images/QueryingtheLedger.png

First, let’s run our query.js program to return a listing of all the cars on the ledger. We will use our second identity - user1 - as the signing entity for this application. The following line in our program specifies user1 as the signer:

首先,我们运行 ``query.js`` 程序,返回账本上记录的所有汽车的列表。我们会使用第二个身份 - ``user1`` - 作为这个应用的签名实体。程序中的下面这行指定 ``user1`` 作为签名者:

fabric_client.getUserContext('user1', true);

Recall that the user1 enrollment material has already been placed into our hfc-key-store subdirectory, so we simply need to tell our application to grab that identity. With the user object defined, we can now proceed with reading from the ledger. A function that will query all the cars, queryAllCars, is pre-loaded in the app, so we can simply run the program as is:

回想一下,``user1`` 的登记材料已经放置在 ``hfc-key-store`` 子目录下,所以我们只需要告诉应用去获取这个身份即可。 定义好用户对象后,我们就可以开始读取账本了。 一个查询所有汽车的函数 ``queryAllCars`` 已经预先加载在应用中,所以我们直接运行程序即可:

node query.js

It should return something like this:

它应该返回类似如下的信息:

Successfully loaded user1 from persistence
Query has completed, checking results
Response is  [{"Key":"CAR0", "Record":{"colour":"blue","make":"Toyota","model":"Prius","owner":"Tomoko"}},
{"Key":"CAR1",   "Record":{"colour":"red","make":"Ford","model":"Mustang","owner":"Brad"}},
{"Key":"CAR2", "Record":{"colour":"green","make":"Hyundai","model":"Tucson","owner":"Jin Soo"}},
{"Key":"CAR3", "Record":{"colour":"yellow","make":"Volkswagen","model":"Passat","owner":"Max"}},
{"Key":"CAR4", "Record":{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}},
{"Key":"CAR5", "Record":{"colour":"purple","make":"Peugeot","model":"205","owner":"Michel"}},
{"Key":"CAR6", "Record":{"colour":"white","make":"Chery","model":"S22L","owner":"Aarav"}},
{"Key":"CAR7", "Record":{"colour":"violet","make":"Fiat","model":"Punto","owner":"Pari"}},
{"Key":"CAR8", "Record":{"colour":"indigo","make":"Tata","model":"Nano","owner":"Valeria"}},
{"Key":"CAR9", "Record":{"colour":"brown","make":"Holden","model":"Barina","owner":"Shotaro"}}]

These are the 10 cars. A black Tesla Model S owned by Adriana, a red Ford Mustang owned by Brad, a violet Fiat Punto owned by Pari, and so on. The ledger is key/value based and in our implementation the key is CAR0 through CAR9. This will become particularly important in a moment.

这里有 10 辆车:一辆 Adriana 拥有的黑色特斯拉 Model S,一辆 Brad 拥有的红色福特野马,一辆 Pari 拥有的紫罗兰色菲亚特 Punto,等等。 这个账本是基于键/值对的,在我们的实现中,键是 ``CAR0`` 到 ``CAR9``。 这一点稍后会变得特别重要。

Let’s take a closer look at this program. Use an editor (e.g. atom or visual studio) and open query.js.

让我们来仔细看看这个程序。用一个编辑器(比如atom或者visual studio)来打开 query.js

The initial section of the application defines certain variables such as channel name, cert store location and network endpoints. In our sample app, these variables have been baked-in, but in a real app these variables would have to be specified by the app dev.

在程序开头定义了一些变量,如通道名称、证书存储位置和网络端点。 在我们的示例应用中,这些变量已经内置好了,但在真实应用中,这些变量必须由应用开发者指定。

// create a channel object for mychannel and add the target peer to it
var channel = fabric_client.newChannel('mychannel');
var peer = fabric_client.newPeer('grpc://localhost:7051');
channel.addPeer(peer);

// the user context and the local key store from which identities are loaded
var member_user = null;
var store_path = path.join(__dirname, 'hfc-key-store');
console.log('Store path:'+store_path);
var tx_id = null;

This is the chunk where we construct our query:

这是我们构造查询的代码片段:

// queryCar chaincode function - requires 1 argument, ex: args: ['CAR4'],
// queryAllCars chaincode function - requires no arguments , ex: args: [''],
const request = {
  //targets : --- letting this default to the peers assigned to the channel
  chaincodeId: 'fabcar',
  fcn: 'queryAllCars',
  args: ['']
};

When the application ran, it invoked the fabcar chaincode on the peer, ran the queryAllCars function within it, and passed no arguments to it.

当应用运行时,它会调用对等节点上的 ``fabcar`` 链码,运行其中的 ``queryAllCars`` 方法,并且不传递任何参数。

To take a look at the available functions within our smart contract, navigate to the chaincode/fabcar/go subdirectory at the root of fabric-samples and open fabcar.go in your editor.

要查看智能合约中可供调用的方法,请进入 ``fabric-samples`` 根目录下的 ``chaincode/fabcar/go`` 子目录,并用编辑器打开 ``fabcar.go`` 文件。

注解

These same functions are defined within the Node.js version of the fabcar chaincode.

同样的方法在Node.js版本的``fabcar``链码中也定义了。

You’ll see that we have the following functions available to call: initLedger, queryCar, queryAllCars, createCar, and changeCarOwner.

你会看到我们有下面一些方法可供调用:initLedger, queryCar, queryAllCars, createCar, 和 changeCarOwner

Let’s take a closer look at the queryAllCars function to see how it interacts with the ledger.

我们仔细看一下``queryAllCars``方法是如何与账本交互的。

func (s *SmartContract) queryAllCars(APIstub shim.ChaincodeStubInterface) sc.Response {

      startKey := "CAR0"
      endKey := "CAR999"

      resultsIterator, err := APIstub.GetStateByRange(startKey, endKey)

This defines the range of queryAllCars. Every car between CAR0 and CAR999 – 1,000 cars in all, assuming every key has been tagged properly – will be returned by the query.

这里定义了 ``queryAllCars`` 的查询范围:``CAR0`` 到 ``CAR999`` 之间的每一辆汽车(假设每一个键都被正确标记,总共 1000 辆)都会被这个查询返回。

Below is a representation of how an app would call different functions in chaincode. Each function must be coded against an available API in the chaincode shim interface, which in turn allows the smart contract container to properly interface with the peer ledger.

下图展示的是一个应用如何调用链码中不同的方法。每一个方法都必须基于链码 shim 接口中可用的 API 进行编写,这样智能合约容器才能正确地与对等节点的账本进行交互。

_images/RunningtheSample.png

We can see our queryAllCars function, as well as one called createCar, that will allow us to update the ledger and ultimately append a new block to the chain in a moment.

我们可以看到 ``queryAllCars`` 方法,还有一个名为 ``createCar`` 的方法,后者可以让我们更新账本,并最终向链上追加一个新的区块。

But first, go back to the query.js program and edit the constructor request to query CAR4. We do this by changing the function in query.js from queryAllCars to queryCar and passing CAR4 as the specific key.

但是首先,回到 ``query.js`` 程序,编辑构造请求的代码来查询 ``CAR4``。具体做法是将 ``query.js`` 中的方法名由 ``queryAllCars`` 改为 ``queryCar``,并传入 ``CAR4`` 作为指定的键。

The query.js program should now look like this:

``query.js``程序应该像下面这样:

const request = {
  //targets : --- letting this default to the peers assigned to the channel
  chaincodeId: 'fabcar',
  fcn: 'queryCar',
  args: ['CAR4']
};

Save the program and navigate back to your fabcar directory. Now run the program again:

保存程序,回到``fabcar``目录,再次运行程序:

node query.js

You should see the following:

现在你可以看到下面的输出:

{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}

If you go back and look at the result from when we queried every car before, you can see that CAR4 was Adriana’s black Tesla model S, which is the result that was returned here.

如果你回头看之前运行查询汽车资产的输出结果,``CAR4``是Adriana的黑色特斯拉Model S,跟这里的输出结果一致。

Using the queryCar function, we can query against any key (e.g. CAR0) and get whatever make, model, color, and owner correspond to that car.

使用``queryCar``方法我们能够根据任何一个键(比如``CAR0``)查询得到汽车的制造商,型号,颜色和拥有者。

Great. At this point you should be comfortable with the basic query functions in the smart contract and the handful of parameters in the query program. Time to update the ledger...

很好!到目前为止,你应该已经熟悉智能合约中的基本查询函数,以及查询程序中的几个参数。是时候更新账本了...

Updating the Ledger - 更新账本

Now that we’ve done a few ledger queries and added a bit of code, we’re ready to update the ledger. There are a lot of potential updates we could make, but let’s start by creating a car.

前面我们做了一些账本查询,也添加了少量代码,现在我们已经准备好更新账本了。可以进行的更新有很多种,但我们先从创建一辆汽车开始。

Below we can see how this process works. An update is proposed, endorsed, then returned to the application, which in turn sends it to be ordered and written to every peer’s ledger:

下图展示了这个过程是如何进行的:一个更新先被提案、背书,然后返回给应用程序,应用程序再将它发送给排序服务,最终写入每一个对等节点的账本:

_images/UpdatingtheLedger.png

Our first update to the ledger will be to create a new car. We have a separate Javascript program – invoke.js – that we will use to make updates. Just as with queries, use an editor to open the program and navigate to the code block where we construct our invocation:

我们对账本的第一个更新操作是创建一辆新汽车。我们有一个独立的 Javascript 程序 – invoke.js – 用来进行更新操作。和查询一样,用编辑器打开这个程序,找到我们构造调用请求的代码块:

// createCar chaincode function - requires 5 args, ex: args: ['CAR12', 'Honda', 'Accord', 'Black', 'Tom'],
// changeCarOwner chaincode function - requires 2 args , ex: args: ['CAR10', 'Barry'],
// must send the proposal to endorsing peers
var request = {
  //targets: let default to the peer assigned to the client
  chaincodeId: 'fabcar',
  fcn: '',
  args: [''],
  chainId: 'mychannel',
  txId: tx_id
};

You’ll see that we can call one of two functions - createCar or changeCarOwner. First, let’s create a red Chevy Volt and give it to an owner named Nick. We’re up to CAR9 on our ledger, so we’ll use CAR10 as the identifying key here. Edit this code block to look like this:

你会看到我们可以调用两个函数中的一个 - ``createCar`` 或者 ``changeCarOwner``。首先,我们创建一辆红色的 Chevy Volt,拥有者为 Nick。我们的账本目前已经到了 ``CAR9``,所以这里用 ``CAR10`` 作为标识键。编辑后这段代码如下:

var request = {
  //targets: let default to the peer assigned to the client
  chaincodeId: 'fabcar',
  fcn: 'createCar',
  args: ['CAR10', 'Chevy', 'Volt', 'Red', 'Nick'],
  chainId: 'mychannel',
  txId: tx_id
};

Save it and run the program:

保存后运行程序:

node invoke.js

There will be some output in the terminal about ProposalResponse and promises. However, all we’re concerned with is this message:

终端上会出现一些关于 ``ProposalResponse`` 和 promise 的输出。但是,我们最关心的是下面这条信息:

The transaction has been committed on peer localhost:7053

To see that this transaction has been written, go back to query.js and change the argument from CAR4 to CAR10.

要确认这条交易已经被写入,回到 ``query.js``,将参数 ``CAR4`` 修改为 ``CAR10``。

In other words, change this:

也就是修改下面的代码:

const request = {
  //targets : --- letting this default to the peers assigned to the channel
  chaincodeId: 'fabcar',
  fcn: 'queryCar',
  args: ['CAR4']
};

To this:

修改为:

const request = {
  //targets : --- letting this default to the peers assigned to the channel
  chaincodeId: 'fabcar',
  fcn: 'queryCar',
  args: ['CAR10']
};

Save once again, then query:

保存后运行查询:

node query.js

Which should return this:

将得到如下返回结果:

Response is  {"colour":"Red","make":"Chevy","model":"Volt","owner":"Nick"}

Congratulations. You’ve created a car!

恭喜!你已经创建了一辆汽车!

So now that we’ve done that, let’s say that Nick is feeling generous and he wants to give his Chevy Volt to someone named Dave.

现在我们已经完成了这一步。假如 Nick 非常慷慨,他想把他的 Chevy Volt 赠送给一个名叫 Dave 的人。

To do this go back to invoke.js and change the function from createCar to changeCarOwner and input the arguments like this:

要完成这件事,我们先回到 ``invoke.js``,将方法由 ``createCar`` 修改为 ``changeCarOwner``,并像下面这样传入参数:

var request = {
  //targets: let default to the peer assigned to the client
  chaincodeId: 'fabcar',
  fcn: 'changeCarOwner',
  args: ['CAR10', 'Dave'],
  chainId: 'mychannel',
  txId: tx_id
};

The first argument – CAR10 – reflects the car that will be changing owners. The second argument – Dave – defines the new owner of the car.

第一个参数 – CAR10 – 指明了将要变更拥有者的汽车;第二个参数 – Dave – 定义了这辆汽车的新拥有者。

Save and execute the program again:

保存后运行程序:

node invoke.js

Now let’s query the ledger again and ensure that Dave is now associated with the CAR10 key:

现在我们再来查询账本来确保Dave现在和键``CAR10``关联起来了:

node query.js

It should return this result:

它将返回如下结果:

Response is  {"colour":"Red","make":"Chevy","model":"Volt","owner":"Dave"}

The ownership of CAR10 has been changed from Nick to Dave.

``CAR10``的拥有者已经由Nick变更为Dave。

注解

In a real world application the chaincode would likely have some access control logic. For example, only certain authorized users may create new cars, and only the car owner may transfer the car to somebody else.

在真实世界的应用中,链码会有一些访问权限控制逻辑。比如,只有授权的用户可以创建新的汽车资产,只有汽车的拥有者可以转移汽车资产给其他人。

Summary - 总结

Now that we’ve done a few queries and a few updates, you should have a pretty good sense of how applications interact with the network. You’ve seen the basics of the roles smart contracts, APIs, and the SDK play in queries and updates and you should have a feel for how different kinds of applications could be used to perform other business tasks and operations.

现在我们已经做了一些查询和更新操作,你应该对应用如何与网络交互有了相当好的理解。 你也已经了解了智能合约、API 和 SDK 在查询和更新中扮演的基本角色,并且应该能够体会到不同类型的应用可以如何用于执行其他商业任务和操作。

In subsequent documents we’ll learn how to actually write a smart contract and how some of these more low level application functions can be leveraged (especially relating to identity and membership services).

在接下来的文档中,我们将学习如何真正地编写一个智能合约,以及如何利用其中一些更底层的应用函数(特别是与身份和成员服务相关的部分)。

Additional Resources - 其他资源

The Hyperledger Fabric Node SDK repo is an excellent resource for deeper documentation and sample code. You can also consult the Fabric community and component experts on Hyperledger Rocket Chat.

Hyperledger Fabric Node SDK 仓库是获取更深入文档和示例代码的绝佳资源。你也可以在 Hyperledger Rocket Chat 上咨询 Fabric 社区和各组件的专家。

Adding an Org to a Channel – 向通道添加组织

注解

Ensure that you have downloaded the appropriate images and binaries as outlined in Hyperledger Fabric 示例 and 预备知识 that conform to the version of this documentation (which can be found at the bottom of the table of contents to the left). In particular, your version of the fabric-samples folder must include the eyfn.sh (“Extending Your First Network”) script and its related scripts.

确保你已经下载了 Hyperledger Fabric 示例 和 预备知识 中所罗列的、和本文档版本(在左边内容列表的底部可以查看)一致的镜像和二进制文件。特别注意,你所使用版本的 fabric-samples 文件夹必须包含 eyfn.sh (“Extending Your First Network”)脚本及其相关脚本。

This tutorial serves as an extension to the Building Your First Network - 构建你的第一个网络 (BYFN) tutorial, and will demonstrate the addition of a new organization – Org3 – to the application channel (mychannel) autogenerated by BYFN. It assumes a strong understanding of BYFN, including the usage and functionality of the aforementioned utilities.

本指南是 Building Your First Network - 构建你的第一个网络 (BYFN) 指南的扩展,将演示如何把一个新的组织 – Org3 – 加入到由 BYFN 自动生成的应用通道(mychannel)中。本篇指南假设你对 BYFN 有很好的理解,包括上面提到的实用工具的用法和功能。

While we will focus solely on the integration of a new organization here, the same approach can be adopted when performing other channel configuration updates (updating modification policies or altering batch size, for example). To learn more about the process and possibilities of channel config updates in general, check out Updating a Channel Configuration. It’s also worth noting that channel configuration updates like the one demonstrated here will usually be the responsibility of an organization admin (rather than a chaincode or application developer).

虽然我们这里只关注新组织的集成,但执行其他通道配置更新(例如更新修改策略、调整批处理大小)也可以采用相同的方式。要全面了解通道配置更新的过程和各种可能性,请查看 Updating a Channel Configuration。还有一点值得注意,像本文演示的这类通道配置更新,通常是组织管理员(而非链码或应用开发者)的职责。

注解

Make sure the automated byfn.sh script runs without error on your machine before continuing. If you have exported your binaries and the related tools (cryptogen, configtxgen, etc) into your PATH variable, you’ll be able to modify the commands accordingly without passing the fully qualified path.

在继续本文前,先确保自动化脚本 byfn.sh 在你的机器上运行无误。如果你已经把二进制文件和相关工具(如 cryptogenconfigtxgen 等)放在了PATH变量指定的路径下,你可以相应地修改命令而不必使用完整路径。

Setup the Environment – 环境构建

We will be operating from the root of the first-network subdirectory within your local clone of fabric-samples. Change into that directory now. You will also want to open a few extra terminals for ease of use.

我们后续的操作都在 fabric-samples 项目本地副本的 first-network 子目录下进行。现在切换到该目录下。你同时需要打开几个额外的终端,以便于操作。

First, use the byfn.sh script to tidy up. This command will kill any active or stale docker containers and remove previously generated artifacts. It is by no means necessary to bring down a Fabric network in order to perform channel configuration update tasks. However, for the sake of this tutorial, we want to operate from a known initial state. Therefore let’s run the following command to clean up any previous environments:

首先,使用 byfn.sh 脚本清理环境。这个命令会清除所有运行中或终止状态的 docker 容器,并移除之前生成的部件。执行通道配置更新任务并不一定要移除 Fabric 网络。但是为了本指南的演示,我们希望从一个已知的初始状态开始,因此让我们运行以下命令来清理之前的环境:

./byfn.sh -m down

Now generate the default BYFN artifacts:

现在生成默认的BYFN部件:

./byfn.sh -m generate

And launch the network making use of the scripted execution within the CLI container:

然后通过执行CLI容器内的脚本来构建网络:

./byfn.sh -m up

Now that you have a clean version of BYFN running on your machine, you have two different paths you can pursue. First, we offer a fully commented script that will carry out a config transaction update to bring Org3 into the network.

现在你的机器上运行着一个干净的BYFN版本,你有两种不同的方式可选。第一种,我们提供了一个带有完整注释的脚本,它会执行一次配置交易更新,把 Org3 加入网络。

Also, we will show a “manual” version of the same process, showing each step and explaining what it accomplishes (since we show you how to bring down your network before this manual process, you could also run the script and then look at each step).

我们也会展示同一过程的“手动”版本,演示每一个步骤并说明其作用(由于我们会在手动过程开始前演示如何移除网络,你也可以先运行脚本,然后再对照查看每个步骤)。

Bring Org3 into the Channel with the Script - 使用脚本向通道加入Org3

You should be in first-network. To use the script, simply issue the following:

first-network 目录下,简单地执行以下命令来使用脚本:

./eyfn.sh up

The output here is well worth reading. You’ll see the Org3 crypto material being added, the config update being created and signed, and then chaincode being installed to allow Org3 to execute ledger queries.

此处的脚本输出值得一读。你会看到 Org3 的加密材料被添加,配置更新被创建和签名,之后链码被安装,使 Org3 能够执行账本查询。

If everything goes well, you’ll get this message:

如果诸事顺利,你会看到以下信息:

========= All GOOD, EYFN test execution completed ===========

eyfn.sh can be used with the same Node.js chaincode and database options as byfn.sh by issuing the following (instead of ./byfn.sh -m up):

eyfn.sh 可以使用和 byfn.sh 一样的Node.js链码和数据库选项,如下所示(替代 ./byfn.sh -m up):

./byfn.sh up -c testchannel -s couchdb -l node

And then:

然后:

./eyfn.sh up -c testchannel -s couchdb -l node

For those who want to take a closer look at this process, the rest of the doc will show you each command for making a channel update and what it does.

对于想要详细了解该过程的人,文档的剩余部分会为你展示通道升级的每个命令,以及命令的作用。

Bring Org3 into the Channel Manually – 向通道手动添加Org3

注解

The manual steps outlined below assume that the CORE_LOGGING_LEVEL in the cli and Org3cli containers is set to DEBUG.

下面的步骤均假设 CORE_LOGGING_LEVEL 变量在 cliOrg3cli 容器中设置为 DEBUG

For the cli container, you can set this by modifying the docker-compose-cli.yaml file in the first-network directory. e.g.

你可以通过修改 first-network 目录下的 docker-compose-cli.yaml 文件来配置 cli 容器。 例:

cli:
  container_name: cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    #- CORE_LOGGING_LEVEL=INFO
    - CORE_LOGGING_LEVEL=DEBUG

For the Org3cli container, you can set this by modifying the docker-compose-org3.yaml file in the first-network directory. e.g.

你可以通过修改 first-network 目录下的 docker-compose-org3.yaml 文件来配置 Org3cli 容器。例:

Org3cli:
  container_name: Org3cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    #- CORE_LOGGING_LEVEL=INFO
    - CORE_LOGGING_LEVEL=DEBUG

If you’ve used the eyfn.sh script, you’ll need to bring your network down. This can be done by issuing:

如果你已经使用了 eyfn.sh 脚本,你需要先移除你的网络。通过如下所示命令来完成:

./eyfn.sh down

This will bring down the network, delete all the containers and undo what we’ve done to add Org3.

When the network is down, bring it back up again.

这会移除网络,删除所有的容器,并且撤销我们添加Org3的操作。

当网络移除后,再次将它建立起来。

./byfn.sh -m generate

Then:

然后:

./byfn.sh -m up

This will bring your network back to the same state it was in before you executed the eyfn.sh script.

这会将你的网络恢复到你执行 eyfn.sh 脚本之前的状态。

Now we’re ready to add Org3 manually. As a first step, we’ll need to generate Org3’s crypto material.

现在我们可以手动添加Org3。第一步,我们需要生成Org3的加密材料。

Generate the Org3 Crypto Material – 生成Org3加密材料

In another terminal, change into the org3-artifacts subdirectory from first-network.

在另一个终端,切换到 first-network 的子目录 org3-artifacts 中。

cd org3-artifacts

There are two yaml files of interest here: org3-crypto.yaml and configtx.yaml. First, generate the crypto material for Org3:

这里需要关注两个 yaml 文件:org3-crypto.yamlconfigtx.yaml。首先,生成Org3的加密材料:

../../bin/cryptogen generate --config=./org3-crypto.yaml

This command reads in our new crypto yaml file – org3-crypto.yaml – and leverages cryptogen to generate the keys and certificates for an Org3 CA as well as two peers bound to this new Org. As with the BYFN implementation, this crypto material is put into a newly generated crypto-config folder within the present working directory (in our case, org3-artifacts).

该命令读取我们新的加密 yaml 文件 – org3-crypto.yaml – 并调用 cryptogen 为 Org3 CA 以及绑定到这个新组织的两个对等节点生成密钥和证书。与 BYFN 的实现一样,这些加密材料被放到当前工作目录(在我们的例子中是 org3-artifacts)下新生成的 crypto-config 文件夹中。
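
You can sanity-check the generated material before moving on (a quick look from within org3-artifacts, assuming the org3.example.com domain used by the sample configuration):

在继续之前,你可以快速检查一下生成的材料(在 org3-artifacts 目录下查看;此处假设示例配置使用的是 org3.example.com 域名):

ls crypto-config/peerOrganizations/org3.example.com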

Now use the configtxgen utility to print out the Org3-specific configuration material in JSON. We will preface the command by telling the tool to look in the current directory for the configtx.yaml file that it needs to ingest.

现在使用 configtxgen 工具将 Org3 相关的配置材料以 JSON 格式打印出来。在执行命令之前,我们先告诉这个工具到当前目录查找它需要读取的 configtx.yaml 文件。

export FABRIC_CFG_PATH=$PWD && ../../bin/configtxgen -printOrg Org3MSP > ../channel-artifacts/org3.json

The above command creates a JSON file – org3.json – and outputs it into the channel-artifacts subdirectory at the root of first-network. This file contains the policy definitions for Org3, as well as three important certificates presented in base 64 format: the admin user certificate (which will be needed to act as the admin of Org3 later on), a CA root cert, and a TLS root cert. In an upcoming step we will append this JSON file to the channel configuration.

上面的命令会创建一个 JSON 文件 – org3.json – 并把它输出到 first-network 根目录的 channel-artifacts 子目录下。这个文件包含了 Org3 的策略定义,以及三个以 base64 格式呈现的重要证书:管理员用户证书(之后将用它来担任 Org3 的管理员)、一个 CA 根证书和一个 TLS 根证书。在接下来的步骤中,我们会将这个 JSON 文件追加到通道配置里。

Our final piece of housekeeping is to port the Orderer Org’s MSP material into the Org3 crypto-config directory. In particular, we are concerned with the Orderer’s TLS root cert, which will allow for secure communication between Org3 entities and the network’s ordering node.

我们最后的例行工作是将排序组织的 MSP 材料拷贝到 Org3 的 crypto-config 目录下。我们尤其关注排序服务的 TLS 根证书,它用于 Org3 实体和网络排序节点之间的安全通信。

cd ../ && cp -r crypto-config/ordererOrganizations org3-artifacts/crypto-config/
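
Afterwards, the Org3 crypto-config directory should contain both organization trees (a quick check; run from the first-network directory, where the previous command left us):

之后,Org3 的 crypto-config 目录下应该同时包含两类组织的材料(一个快速检查;在上一条命令执行完后所处的 first-network 目录下运行):

# expected output: ordererOrganizations  peerOrganizations
ls org3-artifacts/crypto-config/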

Now we’re ready to update the channel configuration...

现在我们准备开始升级通道配置。

Prepare the CLI Environment – 准备CLI环境

The update process makes use of the configuration translator tool – configtxlator. This tool provides a stateless REST API independent of the SDK. Additionally it provides a CLI, to simplify configuration tasks in Fabric networks. The tool allows for the easy conversion between different equivalent data representations/formats (in this case, between protobufs and JSON). Additionally, the tool can compute a configuration update transaction based on the differences between two channel configurations.

更新过程需要用到配置转换工具 – configtxlator。这个工具提供了一个独立于 SDK 的无状态 REST API,此外还提供了 CLI,用于简化 Fabric 网络中的配置任务。该工具可以在不同的等价数据表示/格式之间方便地转换(在本例中是 protobuf 和 JSON 的互转)。另外,该工具还能基于两个不同的通道配置计算出配置更新交易。

First, exec into the CLI container. Recall that this container has been mounted with the BYFN crypto-config library, giving us access to the MSP material for the two original peer organizations and the Orderer Org. The bootstrapped identity is the Org1 admin user, meaning that any steps where we want to act as Org2 will require the export of MSP-specific environment variables.

首先,进入到 CLI 容器。回想一下,这个容器挂载了 BYFN 的 crypto-config 库,使我们可以访问两个原始对等节点组织和排序组织的 MSP 材料。引导身份是 Org1 的管理员用户,这意味着任何想以 Org2 身份执行的步骤,都需要导出 MSP 相关的环境变量。

docker exec -it cli bash
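
For example, acting as the Org2 admin from inside this container would require exports along the following lines (a sketch using the MSP paths mounted into the CLI container by BYFN):

例如,想在这个容器内以 Org2 管理员的身份进行操作,大致需要导出如下环境变量(一个示意,使用 BYFN 挂载到 CLI 容器内的 MSP 路径):

export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051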

Now install the jq tool into the container. This tool allows script interactions with JSON files returned by the configtxlator tool:

现在在容器里安装 jq 工具。这个工具便于脚本与 configtxlator 工具返回的 JSON 文件进行交互:

apt update && apt install -y jq

Export the ORDERER_CA and CHANNEL_NAME variables:

导出 ORDERER_CACHANNEL_NAME 变量:

export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem  && export CHANNEL_NAME=mychannel

Check to make sure the variables have been properly set:

检查并确保环境变量已合理设置:

echo $ORDERER_CA && echo $CHANNEL_NAME

注解

If for any reason you need to restart the CLI container, you will also need to re-export the two environment variables – ORDERER_CA and CHANNEL_NAME. The jq installation will persist. You need not install it a second time.

如果因为什么原因需要重启CLI容器,你会需要重新设置 ORDERER_CACHANNEL_NAME 这两个 环境变量。jq安装会持久化,你不需要再次安装它。

Fetch the Configuration – 获取配置

Now we have a CLI container with our two key environment variables – ORDERER_CA and CHANNEL_NAME exported. Let’s go fetch the most recent config block for the channel – mychannel.

现在我们有了一个设置了 ORDERER_CACHANNEL_NAME 环境变量的CLI容器。让我们获取通道 mychannel 的最近的配置区块。

The reason why we have to pull the latest version of the config is because channel config elements are versioned. Versioning is important for several reasons. It prevents config changes from being repeated or replayed (for instance, reverting to a channel config with old CRLs would represent a security risk). Also it helps ensure concurrency (if you want to remove an Org from your channel, for example, after a new Org has been added, versioning will help prevent you from removing both Orgs, instead of just the Org you want to remove).

我们必须拉取最新版本配置的原因是:通道配置元素是版本化的。版本管理之所以重要有几个原因。它可以防止配置变更被重复提交或被重放(例如,回退到带有旧 CRL 的通道配置会带来安全风险)。它同时有助于保证并发性(例如,如果你想在添加了一个新组织之后再从通道中移除某个组织,版本管理可以帮助你只移除想移除的那个组织,防止两个组织都被移除)。

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA

This command saves the binary protobuf channel configuration block to config_block.pb. Note that the choice of name and file extension is arbitrary. However, following a convention which identifies both the type of object being represented and its encoding (protobuf or JSON) is recommended.

这个命令将通道配置区块以二进制 protobuf 形式保存为 config_block.pb。注意文件名和扩展名可以任意指定。不过,建议遵循一种能同时标识所表示对象类型及其编码格式(protobuf 或 JSON)的命名约定。

When you issued the peer channel fetch command, there was a decent amount of output in the terminal. The last line in the logs is of interest:

2017-11-07 17:17:57.383 UTC [channelCmd] readBlock -> DEBU 011 Received block: 2

This is telling us that the most recent configuration block for mychannel is actually block 2, NOT the genesis block. By default, the peer channel fetch config command returns the most recent configuration block for the targeted channel, which in this case is the third block. This is because the BYFN script defined anchor peers for our two organizations – Org1 and Org2 – in two separate channel update transactions.

As a result, we have the following configuration sequence:

  • block 0: genesis block
  • block 1: Org1 anchor peer update
  • block 2: Org2 anchor peer update
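
If you want to confirm this sequence for yourself, blocks can also be fetched by number rather than with the config keyword. A quick sketch, reusing the ORDERER_CA and CHANNEL_NAME variables exported above:

# Fetch block 1 (the Org1 anchor peer update) by its number.
peer channel fetch 1 block_1.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA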

Convert the Configuration to JSON and Trim It Down

Now we will make use of the configtxlator tool to decode this channel configuration block into JSON format (which can be read and modified by humans). We also must strip away all of the headers, metadata, creator signatures, and so on that are irrelevant to the change we want to make. We accomplish this by means of the jq tool:

configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json

This leaves us with a trimmed down JSON object – config.json, located in the first-network folder inside fabric-samples – which will serve as the baseline for our config update.

Take a moment to open this file inside your text editor of choice (or in your browser). Even after you're done with this tutorial, it will be worth studying, as it reveals the underlying configuration structure and the other kinds of channel updates that can be made. We discuss them in more detail in Updating a Channel Configuration.
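
For instance, as a quick sketch (assuming the config.json produced above), the following jq query lists the organizations currently defined in the application group; at this point it should print only Org1MSP and Org2MSP:

jq '.channel_group.groups.Application.groups | keys' config.json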

Add the Org3 Crypto Material

Note

The steps you’ve taken up to this point will be nearly identical no matter what kind of config update you’re trying to make. We’ve chosen to add an org with this tutorial because it’s one of the most complex channel configuration updates you can attempt.

We’ll use the jq tool once more to append the Org3 configuration definition – org3.json – to the channel’s application groups field, and name the output – modified_config.json.

jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups": {"Org3MSP":.[1]}}}}}' config.json ./channel-artifacts/org3.json > modified_config.json

Now, within the CLI container we have two JSON files of interest – config.json and modified_config.json. The initial file contains only Org1 and Org2 material, whereas the “modified” file contains all three Orgs. At this point it's simply a matter of re-encoding these two JSON files and calculating the delta.
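
If you would like to eyeball the difference before computing it, a simple sketch using a sorted diff of the two files produced above will show the Org3MSP definition as the only addition (process substitution assumes the container's bash shell):

diff <(jq -S . config.json) <(jq -S . modified_config.json)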

First, translate config.json back into a protobuf called config.pb:

configtxlator proto_encode --input config.json --type common.Config --output config.pb

Next, encode modified_config.json to modified_config.pb:

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

Now use configtxlator to calculate the delta between these two config protobufs. This command will output a new protobuf binary named org3_update.pb:

configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated modified_config.pb --output org3_update.pb

This new proto – org3_update.pb – contains the Org3 definitions and high level pointers to the Org1 and Org2 material. We are able to forgo the extensive MSP material and modification policy information for Org1 and Org2 because this data is already present within the channel’s genesis block. As such, we only need the delta between the two configurations.

Before submitting the channel update, we need to perform a few final steps. First, let’s decode this object into editable JSON format and call it org3_update.json:

configtxlator proto_decode --input org3_update.pb --type common.ConfigUpdate | jq . > org3_update.json

Now, we have a decoded update file – org3_update.json – that we need to wrap in an envelope message. This step will give us back the header field that we stripped away earlier. We’ll name this file org3_update_in_envelope.json:

echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":'$(cat org3_update.json)'}}}' | jq . > org3_update_in_envelope.json

Using our properly formed JSON – org3_update_in_envelope.json – we will leverage the configtxlator tool one last time and convert it into the fully fledged protobuf format that Fabric requires. We’ll name our final update object org3_update_in_envelope.pb:

configtxlator proto_encode --input org3_update_in_envelope.json --type common.Envelope --output org3_update_in_envelope.pb

Sign and Submit the Config Update

Almost done!

We now have a protobuf binary – org3_update_in_envelope.pb – within our CLI container. However, we need signatures from the requisite Admin users before the config can be written to the ledger. The modification policy (mod_policy) for our channel Application group is set to the default of “MAJORITY”, which means that we need a majority of existing org admins to sign it. Because we have only two orgs – Org1 and Org2 – and the majority of two is two, we need both of them to sign. Without both signatures, the ordering service will reject the transaction for failing to fulfill the policy.
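
If you are curious how the signatures accumulate, you can decode the envelope after each signing step. As a sketch – assuming the payload layout produced by the echo command above – this counts the signatures currently attached to the config update:

configtxlator proto_decode --input org3_update_in_envelope.pb --type common.Envelope | jq '.payload.data.signatures | length'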

First, let’s sign this update proto as the Org1 Admin. Remember that the CLI container is bootstrapped with the Org1 MSP material, so we simply need to issue the peer channel signconfigtx command:

peer channel signconfigtx -f org3_update_in_envelope.pb

The final step is to switch the CLI container’s identity to reflect the Org2 Admin user. We do this by exporting four environment variables specific to the Org2 MSP.

Note

Switching between organizations to sign a config transaction (or to do anything else) is not reflective of a real-world Fabric operation. A single container would never be mounted with an entire network’s crypto material. Rather, the config update would need to be securely passed out-of-band to an Org2 Admin for inspection and approval.

Export the Org2 environment variables:

# you can issue all of these commands at once

export CORE_PEER_LOCALMSPID="Org2MSP"

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp

export CORE_PEER_ADDRESS=peer0.org2.example.com:7051

Lastly, we will issue the peer channel update command. The Org2 Admin signature will be attached to this call so there is no need to manually sign the protobuf a second time:

Note

The upcoming update call to the ordering service will undergo a series of systematic signature and policy checks. As such you may find it useful to stream and inspect the ordering node’s logs. From another shell, issue a docker logs -f orderer.example.com command to display them.

Send the update call:

peer channel update -f org3_update_in_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050 --tls --cafile $ORDERER_CA

You should see a message digest indication similar to the following if your update has been submitted successfully:

2018-02-24 18:56:33.499 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: 3207B24E40DE2FAB87A2E42BC004FEAA1E6FDCA42977CB78C64F05A88E556ABA

You will also see the submission of our configuration transaction:

2018-02-24 18:56:33.499 UTC [channelCmd] update -> INFO 010 Successfully submitted channel update

The successful channel update call returns a new block – block 5 – to all of the peers on the channel. If you remember, blocks 0-2 are the initial channel configurations while blocks 3 and 4 are the instantiation and invocation of the mycc chaincode. As such, block 5 serves as the most recent channel configuration with Org3 now defined on the channel.

Inspect the logs for peer0.org1.example.com:

docker logs -f peer0.org1.example.com

Follow the demonstrated process to fetch and decode the new config block if you wish to inspect its contents.
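
As a sketch of that process, reusing the commands from earlier in this tutorial, the following fetches the latest config and lists the application-group organizations, which should now include Org3MSP:

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA

configtxlator proto_decode --input config_block.pb --type common.Block | jq '.data.data[0].payload.data.config.channel_group.groups.Application.groups | keys'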

Configuring Leader Election

Note

This section is included as a general reference for understanding the leader election settings when adding organizations to a network after the initial channel configuration has completed. This sample defaults to dynamic leader election, which is set for all peers in the network in peer-base.yaml.

Newly joining peers are bootstrapped with the genesis block, which does not contain information about the organization that is being added in the channel configuration update. Therefore new peers are not able to utilize gossip as they cannot verify blocks forwarded by other peers from their own organization until they get the configuration transaction which added the organization to the channel. Newly added peers must therefore have one of the following configurations so that they receive blocks from the ordering service:

1. To utilize static leader mode, configure the peer to be an organization leader:

CORE_PEER_GOSSIP_USELEADERELECTION=false
CORE_PEER_GOSSIP_ORGLEADER=true

Note

This configuration must be the same for all new peers added to the channel.

2. To utilize dynamic leader election, configure the peer to use leader election:

CORE_PEER_GOSSIP_USELEADERELECTION=true
CORE_PEER_GOSSIP_ORGLEADER=false

Note

Because peers of the newly added organization won’t be able to form membership view, this option will be similar to the static configuration, as each peer will start proclaiming itself to be a leader. However, once they get updated with the configuration transaction that adds the organization to the channel, there will be only one active leader for the organization. Therefore, it is recommended to leverage this option if you eventually want the organization’s peers to utilize leader election.

Join Org3 to the Channel

At this point, the channel configuration has been updated to include our new organization – Org3 – meaning that peers attached to it can now join mychannel.

First, let’s launch the containers for the Org3 peers and an Org3-specific CLI.

Open a new terminal and from first-network kick off the Org3 docker compose:

docker-compose -f docker-compose-org3.yaml up -d

This new compose file has been configured to bridge across our initial network, so the two peers and the CLI container will be able to resolve with the existing peers and ordering node. With the three new containers now running, exec into the Org3-specific CLI container:

docker exec -it Org3cli bash

Just as we did with the initial CLI container, export the two key environment variables: ORDERER_CA and CHANNEL_NAME:

export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem && export CHANNEL_NAME=mychannel

Check to make sure the variables have been properly set:

echo $ORDERER_CA && echo $CHANNEL_NAME

Now let’s send a call to the ordering service asking for the genesis block of mychannel. The ordering service is able to verify the Org3 signature attached to this call as a result of our successful channel update. If Org3 has not been successfully appended to the channel config, the ordering service should reject this request.

Note

Again, you may find it useful to stream the ordering node’s logs to reveal the sign/verify logic and policy checks.

Use the peer channel fetch command to retrieve this block:

peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA

Notice, that we are passing a 0 to indicate that we want the first block on the channel’s ledger (i.e. the genesis block). If we simply passed the peer channel fetch config command, then we would have received block 5 – the updated config with Org3 defined. However, we can’t begin our ledger with a downstream block – we must start with block 0.

Issue the peer channel join command and pass in the genesis block – mychannel.block:

peer channel join -b mychannel.block

If you want to join the second peer for Org3, export the TLS and ADDRESS variables and reissue the peer channel join command:

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer1.org3.example.com/tls/ca.crt && export CORE_PEER_ADDRESS=peer1.org3.example.com:7051

peer channel join -b mychannel.block

Upgrade and Invoke Chaincode

The final piece of the puzzle is to increment the chaincode version and update the endorsement policy to include Org3. Since we know that an upgrade is coming, we can forgo the futile exercise of installing version 1 of the chaincode. We are solely concerned with the new version where Org3 will be part of the endorsement policy, therefore we’ll jump directly to version 2 of the chaincode.

From the Org3 CLI:

peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/

Modify the environment variables accordingly and reissue the command if you want to install the chaincode on the second peer of Org3. Note that a second installation is not mandated, as you only need to install chaincode on peers that are going to serve as endorsers or otherwise interface with the ledger (i.e. query only). Peers will still run the validation logic and serve as committers without a running chaincode container.
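
As a sketch, the environment changes for Org3's second peer mirror the TLS and ADDRESS exports used when joining it to the channel above (assuming peer1.org3.example.com listens on port 7051, as before):

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer1.org3.example.com/tls/ca.crt && export CORE_PEER_ADDRESS=peer1.org3.example.com:7051

peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/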

Now jump back to the original CLI container and install the new version on the Org1 and Org2 peers. We submitted the channel update call with the Org2 admin identity, so the container is still acting on behalf of peer0.org2:

peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/

Flip to the peer0.org1 identity:

export CORE_PEER_LOCALMSPID="Org1MSP"

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

export CORE_PEER_ADDRESS=peer0.org1.example.com:7051

And install again:

peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/

Now we’re ready to upgrade the chaincode. There have been no modifications to the underlying source code, we are simply adding Org3 to the endorsement policy for a chaincode – mycc – on mychannel.

Note

Any identity satisfying the chaincode’s instantiation policy can issue the upgrade call. By default, these identities are the channel Admins.

Send the call:

peer chaincode upgrade -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc -v 2.0 -c '{"Args":["init","a","90","b","210"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')"

You can see in the above command that we are specifying our new version by means of the v flag. You can also see that the endorsement policy has been modified to -P "OR ('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')", reflecting the addition of Org3 to the policy. The final area of interest is our constructor request (specified with the c flag).

As with an instantiate call, a chaincode upgrade requires usage of the init method. If your chaincode requires arguments be passed to the init method, then you will need to do so here.

The upgrade call adds a new block – block 6 – to the channel’s ledger and allows for the Org3 peers to execute transactions during the endorsement phase. Hop back to the Org3 CLI container and issue a query for the value of a. This will take a bit of time because a chaincode image needs to be built for the targeted peer, and the container needs to start:

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see a response of Query Result: 90.

Now issue an invocation to move 10 from a to b:

peer chaincode invoke -o orderer.example.com:7050  --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}'

Query one final time:

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see a response of Query Result: 80, accurately reflecting the update of this chaincode’s world state.

Conclusion

The channel configuration update process is indeed quite involved, but there is a logical method to the various steps. The endgame is to form a delta transaction object represented in protobuf binary format and then acquire the requisite number of admin signatures such that the channel configuration update transaction fulfills the channel’s modification policy.

The configtxlator and jq tools, along with the ever-growing peer channel commands, provide us with the functionality to accomplish this task.

configtxlatorjq 工具,和不断使用的 peer channel 命令,为我们提供了完成这个任务的基本功能。

Upgrading Your Network Components

Note

When we use the term “upgrade” in this documentation, we're primarily referring to changing the version of a component (for example, going from a v1.0.x binary to a v1.1 binary). The term “update,” on the other hand, refers not to versions but to configuration changes, such as updating a channel configuration or a deployment script.

Overview

Because the :doc:`build_network` (BYFN) tutorial defaults to the “latest” binaries, if you have run it since the release of v1.1, your machine will have v1.1 binaries and tools installed on it and you will not be able to upgrade them.

As a result, this tutorial will provide a network based on Hyperledger Fabric v1.0.6 binaries as well as the v1.1 binaries you will be upgrading to. In addition, we will show how to update channel configurations to recognize :doc:`capability_requirements`.

However, because BYFN does not support the following components, our script for upgrading BYFN will not cover them:

  • Fabric-CA
  • Kafka
  • SDK

The process for upgrading these components will be covered in a section following the tutorial.

At a high level, our upgrade tutorial will perform the following steps:

  1. Back up the ledger and MSPs.
  2. Upgrade the orderer binaries to Fabric v1.1.
  3. Upgrade the peer binaries to Fabric v1.1.
  4. Enable v1.1 channel capability requirements.

Note

In production environments, the orderers and peers can be upgraded on a rolling basis simultaneously (in other words, you don’t need to upgrade your orderers before upgrading your peers). Where extra care must be taken is in enabling capabilities. All of the orderers and peers must be upgraded before that step (if only some orderers have been upgraded when capabilities have been enabled, a catastrophic state fork can be created).

This tutorial will demonstrate how to perform each of these steps individually with CLI commands.

Prerequisites

If you haven’t already done so, ensure you have all of the dependencies on your machine as described in :doc:`prereqs`.

Launch a v1.0.6 Network

To begin, we will provision a basic network running Fabric v1.0.6 images. This network will consist of two organizations, each maintaining two peer nodes, and a “solo” ordering service.

We will be operating from the first-network subdirectory within your local clone of fabric-samples. Change into that directory now. You will also want to open a few extra terminals for ease of use.

Clean up

We want to operate from a known state, so we will use the byfn.sh script to initially tidy up. This command will kill any active or stale docker containers and remove any previously generated artifacts. Run the following command:

./byfn.sh -m down

Generate the Crypto and Bring Up the Network

With a clean environment, launch our v1.0.6 BYFN network using these four commands:

git fetch origin

git checkout v1.0.6

./byfn.sh -m generate

./byfn.sh -m up -t 3000 -i 1.0.6

Note

If you have locally built v1.0.6 images, they will be used by the example. If you get errors, consider cleaning up the v1.0.6 images and running the example again; this will download the 1.0.6 images from Docker Hub.

If BYFN has launched properly, you will see:

===================== All GOOD, BYFN execution completed =====================

We are now ready to upgrade our network to Hyperledger Fabric v1.1.

Get the newest samples

Note

The instructions below pertain to whatever is the most recently published version of v1.1.x, starting with 1.1.0-rc1. Please substitute ‘1.1.x’ with the version identifier of the published release that you are testing. e.g. replace ‘v1.1.x’ with ‘v1.1.0’.

Before completing the rest of the tutorial, it’s important to get the v1.1.x version of the samples, you can do this by:

git fetch origin

git checkout v1.1.x

Want to upgrade now?

We have a script that will upgrade all of the components in BYFN as well as enabling capabilities. Afterwards, we will walk you through the steps in the script and describe what each piece of code is doing in the upgrade process.

To run the script, issue these commands:

# Note, replace '1.1.x' with a specific version, for example '1.1.0'.
# Don't pass the image flag '-i 1.1.x' if you prefer to default to 'latest' images.

./byfn.sh upgrade -i 1.1.x

If the upgrade is successful, you should see the following:

===================== All GOOD, End-2-End UPGRADE Scenario execution completed =====================

If you want to upgrade the network manually, simply run ./byfn.sh -m down again and perform the steps up to – but not including – ./byfn.sh upgrade -i 1.1.x. Then proceed to the next section.

Note

Many of the commands you’ll run in this section will not result in any output. In general, assume no output is good output.

Upgrade the Orderer Containers

Note

Pay CLOSE attention to your orderer upgrades. If they are not done correctly – specifically, if only some orderers are upgraded and not others – a state fork could be created (meaning, ledgers would no longer be consistent). This MUST be avoided.

Orderer containers should be upgraded in a rolling fashion (one at a time). At a high level, the orderer upgrade process goes as follows:

  1. Stop the orderer.
  2. Back up the orderer’s ledger and MSP.
  3. Restart the orderer with the latest images.
  4. Verify upgrade completion.

As a consequence of leveraging BYFN, we have a solo orderer setup, therefore, we will only perform this process once. In a Kafka setup, however, this process will have to be performed for each orderer.

Note

This tutorial uses a docker deployment. For native deployments, replace the file orderer with the one from the release artifacts. Backup the orderer.yaml and replace it with the orderer.yaml file from the release artifacts. Then port any modified variables from the backed up orderer.yaml to the new one. Utilizing a utility like diff may be helpful. To decrease confusion, the variable General.TLS.ClientAuthEnabled has been renamed to General.TLS.ClientAuthRequired (just as it is specified in the peer configuration.). If the old name for this variable is still present in the orderer.yaml file, the new orderer binary will fail to start.

Let's begin the upgrade process by bringing down the orderer:

docker stop orderer.example.com

export LEDGERS_BACKUP=./ledgers-backup

# Note, replace '1.1.x' with a specific version, for example '1.1.0'.
# Set IMAGE_TAG to 'latest' if you prefer to default to the images tagged 'latest' on your system.

export IMAGE_TAG=`uname -m`-1.1.x

We have created a variable for the directory to put file backups into, and exported the IMAGE_TAG we'd like to move to.
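
As a quick sanity check (assuming the exports above), confirm both values before proceeding:

echo $LEDGERS_BACKUP && echo $IMAGE_TAG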

Once the orderer is down, you’ll want to backup its ledger and MSP:

mkdir -p $LEDGERS_BACKUP

docker cp orderer.example.com:/var/hyperledger/production/orderer/ ./$LEDGERS_BACKUP/orderer.example.com

In a production network this process would be repeated for each of the Kafka-based orderers in a rolling fashion.

Now download and restart the orderer with our new fabric image:

docker-compose -f docker-compose-cli.yaml up -d --no-deps orderer.example.com

Because our sample uses a “solo” ordering service, there are no other orderers in the network that the restarted orderer must sync up to. However, in a production network leveraging Kafka, it will be a best practice to issue peer channel fetch <blocknumber> after restarting the orderer to verify that it has caught up to the other orderers.
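
As a sketch of that check – run from inside a CLI container with ORDERER_CA exported as in the tutorials above – fetch the newest block from the restarted orderer and compare its number against what the other orderers report:

peer channel fetch newest newest_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile $ORDERER_CA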

Upgrade the Peer Containers

Next, let's look at how to upgrade peer containers to Fabric v1.1. Peer containers should, like the orderers, be upgraded in a rolling fashion (one at a time). As mentioned during the orderer upgrade, orderers and peers may be upgraded in parallel, but for the purposes of this tutorial we've separated the processes out. At a high level, we will perform the following steps:

  1. Stop the peer.
  2. Back up the peer’s ledger and MSP.
  3. Remove chaincode containers and images.
  4. Restart the peer with the latest image.
  5. Verify upgrade completion.

We have four peers running in our network. We will perform this process once for each peer, totaling four upgrades.
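
As a sketch only – the walkthrough below covers a single peer, and in production you would verify each peer before moving to the next – the shape of that repetition looks like this, assuming the backup directory and compose file used in this tutorial:

for PEER in peer0.org1.example.com peer1.org1.example.com peer0.org2.example.com peer1.org2.example.com; do
  docker stop $PEER
  # Back up the peer's ledger and MSP.
  mkdir -p $LEDGERS_BACKUP
  docker cp $PEER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER
  # Remove any chaincode containers and images belonging to this peer.
  CC_CONTAINERS=$(docker ps | grep dev-$PEER | awk '{print $1}')
  if [ -n "$CC_CONTAINERS" ]; then docker rm -f $CC_CONTAINERS; fi
  CC_IMAGES=$(docker images | grep dev-$PEER | awk '{print $1}')
  if [ -n "$CC_IMAGES" ]; then docker rmi -f $CC_IMAGES; fi
  # Relaunch the peer using the new image tag.
  docker-compose -f docker-compose-cli.yaml up -d --no-deps $PEER
done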

Note

Again, this tutorial utilizes a docker deployment. For native deployments, replace the file peer with the one from the release artifacts. Backup your core.yaml and replace it with the one from the release artifacts. Port any modified variables from the backed up core.yaml to the new one. Utilizing a utility like diff may be helpful.

Let’s bring down the first peer with the following command:

export PEER=peer0.org1.example.com

docker stop $PEER

We can then backup the peer’s ledger and MSP:

mkdir -p $LEDGERS_BACKUP

docker cp $PEER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER

With the peer stopped and the ledger backed up, remove the peer chaincode containers:

CC_CONTAINERS=$(docker ps | grep dev-$PEER | awk '{print $1}')
if [ -n "$CC_CONTAINERS" ] ; then docker rm -f $CC_CONTAINERS ; fi

And the peer chaincode images:

CC_IMAGES=$(docker images | grep dev-$PEER | awk '{print $1}')
if [ -n "$CC_IMAGES" ] ; then docker rmi -f $CC_IMAGES ; fi

Now we’ll re-launch the peer using the v1.1 image tag:

docker-compose -f docker-compose-cli.yaml up -d --no-deps $PEER

Note

Although BYFN supports using CouchDB, we opted for a simpler implementation in this tutorial. If you are using CouchDB, however, follow the instructions in the Upgrading CouchDB section below at this time and then issue this command instead of the one above:

docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d --no-deps $PEER

We’ll talk more generally about how to update CouchDB after the tutorial.

Verify Upgrade Completion

We’ve completed the upgrade for our first peer, but before we move on let’s check to ensure the upgrade has been completed properly with a chaincode invoke. Let’s move 10 from a to b using these commands:

docker-compose -f docker-compose-cli.yaml up -d --no-deps cli

docker exec -it cli bash

peer chaincode invoke -o orderer.example.com:7050  --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'

Our query earlier revealed a to have a value of 90 and we have just removed 10 with our invoke. Therefore, a query against a should reveal 80. Let’s see:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

We should see the following:

Query Result: 80

After verifying the peer was upgraded correctly, make sure to issue an exit to leave the container before continuing to upgrade your peers. You can do this by repeating the process above with a different peer name exported.

export PEER=peer1.org1.example.com
export PEER=peer0.org2.example.com
export PEER=peer1.org2.example.com

Note

All peers must be upgraded BEFORE enabling capabilities.

Enable Capabilities for the Channels

Because v1.0.x Fabric binaries do not understand the concept of channel capabilities, extra care must be taken when initially enabling capabilities for a channel.

Although Fabric binaries can and should be upgraded in a rolling fashion, it is critical that the ordering admins not attempt to enable v1.1 capabilities until all orderer binaries are at v1.1.x+. If any orderer is executing v1.0.x code, and capabilities are enabled for a channel, the blockchain will fork as v1.0.x orderers invalidate the change and v1.1.x+ orderers accept it. This is an exception for the v1.0 to v1.1 upgrade. For future upgrades, such as v1.1 to v1.2, the ordering network will handle the upgrade more gracefully and prevent the state fork.

In order to minimize the chance of a fork, attempts to enable the application or channel v1.1 capabilities before enabling the orderer v1.1 capability will be rejected. Since the orderer v1.1 capability can only be enabled by the ordering admins, making it a prerequisite for the other capabilities prevents application admins from accidentally enabling capabilities before the orderer is ready to support them.

Note

Once a capability has been enabled, disabling it is not recommended or supported.

Once a capability has been enabled, it becomes part of the permanent record for that channel. This means that even after disabling the capability, old binaries will not be able to participate in the channel because they cannot process beyond the block which enabled the capability to get to the block which disables it.

For this reason, think of enabling channel capabilities as a point of no return. Please experiment with the new capabilities in a test setting and be confident before proceeding to enable them in production.

Note that enabling capability requirements on a channel which a v1.0.0 peer is joined to will result in a crash of the peer. This crashing behavior is deliberate because it indicates a misconfiguration which might result in a state fork.

The error message displayed by failing v1.0.x peers will say:

Cannot commit block to the ledger due to Error validating config which passed
initial validity checks: ConfigEnvelope LastUpdate did not produce the supplied
config result

We will enable capabilities in the following order:

  1. Orderer System Channel
     1. Orderer Group
     2. Channel Group
  2. Individual Channels
     1. Orderer Group
     2. Channel Group
     3. Application Group

Note

In order to minimize the chance of a fork a best practice is to enable the orderer system capability first and then enable individual channel capabilities.

For each group, we will enable the capabilities in the following order:

  1. Get the latest channel config
  2. Create a modified channel config
  3. Create a config update transaction

Note

This process will be accomplished through a series of config update transactions, one for each channel group. In a real world production network, these channel config updates would be handled by the admins for each channel. Because BYFN all exists on a single machine, it is possible for us to update each of these channels.

For more information on updating channel configs, click on :doc:`channel_update_tutorial` or the doc on :doc:`config_update`.

Get back into the cli container by reissuing docker exec -it cli bash.

Now let’s check the set environment variables with:

env|grep PEER

You’ll also need to install jq:

apt-get update

apt-get install -y jq

Orderer System Channel Capabilities

Let’s set our environment variables for the orderer system channel. Issue each of these commands:

CORE_PEER_LOCALMSPID="OrdererMSP"

CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/users/Admin@example.com/msp

ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

And let’s set our channel name to testchainid:

CH_NAME=testchainid

Orderer Group

The first step in updating a channel configuration is getting the latest config block:

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CH_NAME  --tls --cafile $ORDERER_CA

Note

We require configtxlator v1.0.0 or higher for this next step.

To make our config easy to edit, let’s convert the config block to JSON using configtxlator:

configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

This command uses jq to remove the headers, metadata, and signatures from the config:

jq .data.data[0].payload.data.config config_block.json > config.json

Next, add capabilities to the orderer group. The following command will create a copy of the config file and add our new capabilities to it:

jq -s '.[0] * {"channel_group":{"groups":{"Orderer": {"values": {"Capabilities": .[1]}}}}}' config.json ./scripts/capabilities.json > modified_config.json

Note what we're changing here: Capabilities are being added as a value of the orderer group under channel_group. The specific channel we're working in is not noted in this command, but recall that it's the orderer system channel testchainid. It should be updated first because it is this channel's configuration that will be copied by default during the creation of any new channel.
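
To confirm the edit took effect, a quick sketch – assuming the modified_config.json produced above, and that ./scripts/capabilities.json defines the V1_1 capability as in the fabric-samples for this release – prints the capabilities now attached to the orderer group:

jq '.channel_group.groups.Orderer.values.Capabilities' modified_config.json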

Now we can create the config update:

configtxlator proto_encode --input config.json --type common.Config --output config.pb

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

configtxlator compute_update --channel_id $CH_NAME --original config.pb --updated modified_config.pb --output config_update.pb

Package the config update into a transaction:

configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate --output config_update.json

echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CH_NAME'", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json

configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb

Submit the config update transaction:

Note

The command below both signs and submits the transaction to the ordering service.

peer channel update -f config_update_in_envelope.pb -c $CH_NAME -o orderer.example.com:7050 --tls true --cafile $ORDERER_CA

Our config update transaction represents the difference between the original config and the modified one, but the orderer will translate this into a full channel config.
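
To verify, you can fetch and decode the config once more – a sketch reusing the commands from above – and check that the orderer group now carries the capability:

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CH_NAME --tls --cafile $ORDERER_CA

configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

jq '.data.data[0].payload.data.config.channel_group.groups.Orderer.values.Capabilities' config_block.json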

Channel Group

Now let’s move on to enabling capabilities for the channel group at the orderer system level.

The first step, as before, is to get the latest channel configuration.

Note

This set of commands is exactly the same as the steps from the orderer group.

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CH_NAME --tls --cafile $ORDERER_CA

configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

jq .data.data[0].payload.data.config config_block.json > config.json

Next, create a modified channel config:

jq -s '.[0] * {"channel_group":{"values": {"Capabilities": .[1]}}}' config.json ./scripts/capabilities.json > modified_config.json

Note what we’re changing here: Capabilities are being added as a value of the top level channel_group (in the testchainid channel, as before).

Create the config update transaction:

Note

This set of commands is exactly the same as the third step from the orderer group.

configtxlator proto_encode --input config.json --type common.Config --output config.pb

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

configtxlator compute_update --channel_id $CH_NAME --original config.pb --updated modified_config.pb --output config_update.pb

Package the config update into a transaction:

configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate --output config_update.json

echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CH_NAME'", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json

configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb

Submit the config update transaction:

peer channel update -f config_update_in_envelope.pb -c $CH_NAME -o orderer.example.com:7050 --tls true --cafile $ORDERER_CA

Enabling Capabilities on Existing Channels

Set the channel name to mychannel:

CH_NAME=mychannel

Orderer Group

Get the channel config:

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CH_NAME  --tls --cafile $ORDERER_CA

configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

jq .data.data[0].payload.data.config config_block.json > config.json

Let’s add capabilities to the orderer group. The following command will create a copy of the config file and add our new capabilities to it:

jq -s '.[0] * {"channel_group":{"groups":{"Orderer": {"values": {"Capabilities": .[1]}}}}}' config.json ./scripts/capabilities.json > modified_config.json

Note what we’re changing here: Capabilities are being added as a value of the orderer group under channel_group. This is exactly what we changed before, only now we’re working with the config to the channel mychannel instead of testchainid.

Create the config update:

configtxlator proto_encode --input config.json --type common.Config --output config.pb

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

configtxlator compute_update --channel_id $CH_NAME --original config.pb --updated modified_config.pb --output config_update.pb

Package the config update into a transaction:

configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate --output config_update.json

echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CH_NAME'", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json

configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb

Submit the config update transaction:

peer channel update -f config_update_in_envelope.pb -c $CH_NAME -o orderer.example.com:7050 --tls true --cafile $ORDERER_CA

Channel Group

Note

While this may seem repetitive, remember that we’re performing the same process on different groups. In a production network, as we’ve said, this process would likely be split up among the various channel admins.

Fetch, decode, and scope the config:

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CH_NAME --tls --cafile $ORDERER_CA

configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

jq .data.data[0].payload.data.config config_block.json > config.json

Create a modified config:

jq -s '.[0] * {"channel_group":{"values": {"Capabilities": .[1]}}}' config.json ./scripts/capabilities.json > modified_config.json

Note what we’re changing here: Capabilities are being added as a value of the top level channel_group (in mychannel, as before).

Create the config update:

configtxlator proto_encode --input config.json --type common.Config --output config.pb

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

configtxlator compute_update --channel_id $CH_NAME --original config.pb --updated modified_config.pb --output config_update.pb

Package the config update into a transaction:

configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate --output config_update.json

echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CH_NAME'", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json

configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb

Because we’re updating the config of the channel group, the relevant orgs – Org1, Org2, and the OrdererOrg – need to sign it. This task would usually be performed by the individual org admins, but in BYFN, as we’ve said, this task falls to us.

First, switch into Org1 and sign the update:

CORE_PEER_LOCALMSPID="Org1MSP"

CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

CORE_PEER_ADDRESS=peer0.org1.example.com:7051

peer channel signconfigtx -f config_update_in_envelope.pb

And do the same as Org2:

CORE_PEER_LOCALMSPID="Org2MSP"

CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp

CORE_PEER_ADDRESS=peer0.org2.example.com:7051

peer channel signconfigtx -f config_update_in_envelope.pb

And as the OrdererOrg:

CORE_PEER_LOCALMSPID="OrdererMSP"

CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/users/Admin@example.com/msp

peer channel update -f config_update_in_envelope.pb -c $CH_NAME -o orderer.example.com:7050 --tls true --cafile $ORDERER_CA

Application Group

For the application group, we will need to reset the environment variables as one organization:

CORE_PEER_LOCALMSPID="Org1MSP"

CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

CORE_PEER_ADDRESS=peer0.org1.example.com:7051

Now, get the latest channel config (this process should be very familiar by now):

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CH_NAME --tls --cafile $ORDERER_CA

configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

jq .data.data[0].payload.data.config config_block.json > config.json

Create a modified channel config:

jq -s '.[0] * {"channel_group":{"groups":{"Application": {"values": {"Capabilities": .[1]}}}}}' config.json ./scripts/capabilities.json > modified_config.json

Note what we’re changing here: Capabilities are being added as a value of the Application group under channel_group (in mychannel).

Create a config update transaction:

configtxlator proto_encode --input config.json --type common.Config --output config.pb

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

configtxlator compute_update --channel_id $CH_NAME --original config.pb --updated modified_config.pb --output config_update.pb

Package the config update into a transaction:

configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate --output config_update.json

echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CH_NAME'", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json

configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb

Org1 signs the transaction:

peer channel signconfigtx -f config_update_in_envelope.pb

Set the environment variables as Org2:

export CORE_PEER_LOCALMSPID="Org2MSP"

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp

export CORE_PEER_ADDRESS=peer0.org2.example.com:7051

Org2 submits the config update transaction with its signature:

peer channel update -f config_update_in_envelope.pb -c $CH_NAME -o orderer.example.com:7050 --tls true --cafile $ORDERER_CA

Congratulations! You have now enabled capabilities on all of your channels.

Verify that Capabilities are Enabled

But let’s test just to make sure by moving 10 from a to b, as before:

peer chaincode invoke -o orderer.example.com:7050  --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'

And then querying the value of a, which should reveal a value of 70. Let’s see:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

We should see the following:

Query Result: 70

In which case we have successfully added capabilities to all of our channels.

Note

Although all peer binaries in the network should have been upgraded prior to this point, enabling capability requirements on a channel which a v1.0.0 peer is joined to will result in a crash of the peer. This crashing behavior is deliberate because it indicates a misconfiguration which might result in a state fork.

注意: 虽然网络中的所有节点二进制文件都应该在此之前进行升级,但是在一个v1.0.0节点加入的信道上启用功能的要求将导致节点崩溃。这种崩溃行为是故意的,因为它表明可能导致状态分叉的配置错误。

Upgrading Components BYFN Does Not Support

Although this is the end of our update tutorial, there are other components that exist in production networks that are not supported by the BYFN sample. In this section, we’ll talk through the process of updating them.

Fabric CA Container

To learn how to upgrade your Fabric CA server, click over to the CA documentation.

Upgrade Node SDK Clients

Note

Upgrade Fabric CA before upgrading Node SDK clients.

Use NPM to upgrade any Node.js client by executing these commands in the root directory of your application:

npm install fabric-client@1.1

npm install fabric-ca-client@1.1

These commands install the new version of both the Fabric client and Fabric-CA client and write the new versions to package.json.
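
After the installs complete, the dependencies section of your package.json should contain entries roughly like the following (the exact resolved patch versions will vary):

"dependencies": {
  "fabric-ca-client": "^1.1.0",
  "fabric-client": "^1.1.0"
}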

Upgrading the Kafka Cluster

It is not required, but it is recommended that the Kafka cluster be upgraded and kept up to date along with the rest of Fabric. Newer versions of Kafka support older protocol versions, so you may upgrade Kafka before or after the rest of Fabric.

If your Kafka cluster is older than Kafka v0.11.0, this upgrade is especially recommended as it hardens replication in order to better handle crash faults.

Refer to the official Apache Kafka documentation on upgrading Kafka from previous versions (https://kafka.apache.org/documentation/#upgrade) to upgrade the Kafka cluster brokers.

Please note that the Kafka cluster might experience a negative performance impact if the orderer is configured to use a Kafka protocol version that is older than the Kafka broker version. The Kafka protocol version is set using either the Kafka.Version key in the orderer.yaml file or via the ORDERER_KAFKA_VERSION environment variable in a Docker deployment. Fabric v1.0 provided sample Kafka docker images containing Kafka version 0.9.0.1. Fabric v1.1 provides sample Kafka docker images containing Kafka version v1.0.0.

Note

You must configure the Kafka protocol version used by the orderer to match your Kafka cluster version, even if it was not set before. For example, if you are using the sample Kafka images provided with Fabric v1.0.x, either set the ORDERER_KAFKA_VERSION environment variable, or the Kafka.Version key in the orderer.yaml to 0.9.0.1. If you are unsure about your Kafka cluster version, you can configure the orderer’s Kafka protocol version to 0.9.0.1 for maximum compatibility and update the setting afterwards when you have determined your Kafka cluster version.
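
In orderer.yaml, the setting looks roughly like this:

Kafka:
    # Match this to the version your Kafka cluster brokers are running,
    # e.g. 0.9.0.1 for the sample images shipped with Fabric v1.0.x.
    Version: 0.9.0.1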
Upgrading Zookeeper

An Apache Kafka cluster requires an Apache Zookeeper cluster. The Zookeeper API has been stable for a long time and, as such, almost any version of Zookeeper is tolerated by Kafka. Refer to the Apache Kafka upgrade documentation (https://kafka.apache.org/documentation/#upgrade) in case there is a specific requirement to upgrade to a specific version of Zookeeper. If you would like to upgrade your Zookeeper cluster, some information on upgrading Zookeeper clusters can be found in the Zookeeper FAQ (https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ).

Upgrading CouchDB

If you are using CouchDB as state database, upgrade the peer’s CouchDB at the same time the peer is being upgraded. To upgrade CouchDB (a Docker-based sketch follows the list):

  1. Stop CouchDB.
  2. Backup CouchDB data directory.
  3. Delete CouchDB data directory.
  4. Install CouchDB v2.1.1 binaries or update deployment scripts to use a new Docker image (CouchDB v2.1.1 pre-configured Docker image is provided alongside Fabric v1.1).
  5. Restart CouchDB.
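
For a Docker-based deployment, the sequence might look like the following sketch. The container name couchdb0 and the host data path /var/couchdb-data are assumptions for illustration; CouchDB 2.x images conventionally keep data under /opt/couchdb/data.

docker stop couchdb0 && docker rm couchdb0
cp -r /var/couchdb-data /var/couchdb-data.bak     # back up the data directory
rm -rf /var/couchdb-data/*                        # delete it; the v1.1 peer rebuilds the state databases
docker run -d --name couchdb0 -p 5984:5984 \
    -v /var/couchdb-data:/opt/couchdb/data \
    hyperledger/fabric-couchdb:latest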

The reason to delete the CouchDB data directory is that upon startup the v1.1 peer will rebuild the CouchDB state databases from the blockchain transactions. Starting in v1.1, there will be an internal CouchDB database for each channel_chaincode combination (for each chaincode instantiated on each channel that the peer has joined).

Upgrade Chaincodes With Vendored Shim

A number of third party tools exist that will allow you to vendor a chaincode shim. If you used one of these tools, use the same one to update your vendoring and re-package your chaincode.

If your chaincode vendors the shim, after updating the shim version, you must install it to all peers which already have the chaincode. Install it with the same name, but a newer version. Then you should execute a chaincode upgrade on each channel where this chaincode has been deployed to move to the new version.

If you did not vendor your chaincode, you can skip this step entirely.

Chaincode Tutorials

What is Chaincode?

Chaincode is a program, written in Go or node.js (and eventually in other programming languages such as Java), that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages ledger state through transactions submitted by applications.

A chaincode typically handles business logic agreed to by members of the network, so it may be considered as a “smart contract”. State created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly by another chaincode. However, within the same network, given the appropriate permission a chaincode may invoke another chaincode to access its state.

Two Personas

We offer two different perspectives on chaincode. One, from the perspective of an application developer developing a blockchain application/solution, entitled Chaincode for Developers, and the other, Chaincode for Operators, oriented to the blockchain network operator who is responsible for managing a blockchain network, and who would leverage the Hyperledger Fabric API to install, instantiate, and upgrade chaincode, but would likely not be involved in the development of a chaincode application.


Chaincode for Developers

What is Chaincode?

Chaincode is a program, written in Go or node.js, that implements a prescribed interface. Eventually, other programming languages, such as Java, will be supported. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages the ledger state through transactions submitted by applications.

A chaincode typically handles business logic agreed to by members of the network, so it is similar to a “smart contract”. A chaincode can be invoked to update or query the ledger in a proposal transaction. Given the appropriate permission, a chaincode may invoke another chaincode, either in the same channel or in different channels, to access its state. Note that, if the called chaincode is on a different channel from the calling chaincode, only read queries are allowed. That is, calling a chaincode on a different channel is only a query, which does not participate in state validation checks in the subsequent commit phase.
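
As a sketch (not part of the sample), a cross-channel query from inside a chaincode might look like this, assuming a chaincode named othercc instantiated on channel otherchannel:

// Query chaincode "othercc" on channel "otherchannel". Because the target is
// on a different channel, only the read result is usable; any writes it makes
// are not committed.
response := stub.InvokeChaincode("othercc", [][]byte{[]byte("query"), []byte("a")}, "otherchannel")
if response.Status != shim.OK {
    return shim.Error(response.Message)
}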

In the following sections, we will explore chaincode through the eyes of an application developer. We’ll present a simple chaincode sample application and walk through the purpose of each method in the Chaincode Shim API.

Chaincode API

Every chaincode program must implement the Chaincode interface:
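
In the Go shim (github.com/hyperledger/fabric/core/chaincode/shim), the interface is defined as follows (comments abridged; pb is the protos/peer package):

type Chaincode interface {
    // Init is called during chaincode instantiation to initialize any data.
    Init(stub ChaincodeStubInterface) pb.Response

    // Invoke is called to update or query the ledger in a proposal transaction.
    Invoke(stub ChaincodeStubInterface) pb.Response
}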

whose methods are called in response to received transactions. In particular, the Init method is called when a chaincode receives an instantiate or upgrade transaction so that the chaincode may perform any necessary initialization, including initialization of application state. The Invoke method is called in response to receiving an invoke transaction to process transaction proposals.

The other interface in the chaincode “shim” APIs is the ChaincodeStubInterface:
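
An abridged excerpt, limited to the methods used in this tutorial (the full interface has many more):

type ChaincodeStubInterface interface {
    // GetStringArgs returns the arguments of the call as a string array.
    GetStringArgs() []string

    // GetFunctionAndParameters returns the first argument as the function
    // name and the remaining arguments as parameters.
    GetFunctionAndParameters() (string, []string)

    // GetState returns the value of the specified key from the ledger.
    GetState(key string) ([]byte, error)

    // PutState adds the specified key and value to the transaction's write set.
    PutState(key string, value []byte) error

    // InvokeChaincode calls the specified chaincode, optionally on another channel.
    InvokeChaincode(chaincodeName string, args [][]byte, channel string) pb.Response

    // ... additional methods elided ...
}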

which is used to access and modify the ledger, and to make invocations between chaincodes.

In this tutorial, we will demonstrate the use of these APIs by implementing a simple chaincode application that manages simple “assets”.

Simple Asset Chaincode

Our application is a basic sample chaincode to create assets (key-value pairs) on the ledger.

Choosing a Location for the Code

If you haven’t been doing programming in Go, you may want to make sure that you have the Go Programming Language installed and your system properly configured.

Now, you will want to create a directory for your chaincode application as a child directory of $GOPATH/src/.

To keep things simple, let’s use the following command:

mkdir -p $GOPATH/src/sacc && cd $GOPATH/src/sacc

Now, let’s create the source file that we’ll fill in with code:

touch sacc.go
Housekeeping

First, let’s start with some housekeeping. As with every chaincode, it implements the Chaincode interface, in particular, the Init and Invoke functions. So, let’s add the Go import statements for the necessary dependencies for our chaincode. We’ll import the chaincode shim package and the peer protobuf package. Next, let’s add a struct SimpleAsset as a receiver for Chaincode shim functions.

package main

import (
    "fmt"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}
Initializing the Chaincode

Next, we’ll implement the Init function.

// Init is called during chaincode instantiation to initialize any data.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {

}

Note

Note that chaincode upgrade also calls this function. When writing a chaincode that will upgrade an existing one, make sure to modify the Init function appropriately. In particular, provide an empty “Init” method if there’s no “migration” or nothing to be initialized as part of the upgrade.
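
For instance, an upgrade that has nothing to migrate would deliberately make Init a no-op so the existing ledger state is left untouched:

// Init for an upgrade with no migration: intentionally does nothing.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
    return shim.Success(nil)
}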

Next, we’ll retrieve the arguments to the Init call using the ChaincodeStubInterface.GetStringArgs function and check for validity. In our case, we are expecting a key-value pair.

// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
  // Get the args from the transaction proposal
  args := stub.GetStringArgs()
  if len(args) != 2 {
    return shim.Error("Incorrect arguments. Expecting a key and a value")
  }
}

Next, now that we have established that the call is valid, we’ll store the initial state in the ledger. To do this, we will call ChaincodeStubInterface.PutState with the key and value passed in as the arguments. Assuming all went well, return a peer.Response object that indicates the initialization was a success.

// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
  // Get the args from the transaction proposal
  args := stub.GetStringArgs()
  if len(args) != 2 {
    return shim.Error("Incorrect arguments. Expecting a key and a value")
  }

  // Set up any variables or assets here by calling stub.PutState()

  // We store the key and the value on the ledger
  err := stub.PutState(args[0], []byte(args[1]))
  if err != nil {
    return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
  }
  return shim.Success(nil)
}
Invoking the Chaincode

First, let’s add the Invoke function’s signature.

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The 'set'
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {

}

As with the Init function above, we need to extract the arguments from the ChaincodeStubInterface. The Invoke function’s arguments will be the name of the chaincode application function to invoke. In our case, our application will simply have two functions: set and get, that allow the value of an asset to be set or its current state to be retrieved. We first call ChaincodeStubInterface.GetFunctionAndParameters to extract the function name and the parameters to that chaincode application function.

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // Extract the function and args from the transaction proposal
    fn, args := stub.GetFunctionAndParameters()

}

Next, we’ll validate the function name as being either set or get, and invoke those chaincode application functions, returning an appropriate response via the shim.Success or shim.Error functions that will serialize the response into a gRPC protobuf message.

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // Extract the function and args from the transaction proposal
    fn, args := stub.GetFunctionAndParameters()

    var result string
    var err error
    if fn == "set" {
            result, err = set(stub, args)
    } else {
            result, err = get(stub, args)
    }
    if err != nil {
            return shim.Error(err.Error())
    }

    // Return the result as success payload
    return shim.Success([]byte(result))
}
Implementing the Chaincode Application

As noted, our chaincode application implements two functions that can be invoked via the Invoke function. Let’s implement those functions now. Note that as we mentioned above, to access the ledger’s state, we will leverage the ChaincodeStubInterface.PutState and ChaincodeStubInterface.GetState functions of the chaincode shim API.

// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 2 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
    }

    err := stub.PutState(args[0], []byte(args[1]))
    if err != nil {
            return "", fmt.Errorf("Failed to set asset: %s", args[0])
    }
    return args[1], nil
}

// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 1 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key")
    }

    value, err := stub.GetState(args[0])
    if err != nil {
            return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
    }
    if value == nil {
            return "", fmt.Errorf("Asset not found: %s", args[0])
    }
    return string(value), nil
}
Pulling it All Together

Finally, we need to add the main function, which will call the shim.Start function. Here’s the whole chaincode program source.

package main

import (
    "fmt"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}

// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
    // Get the args from the transaction proposal
    args := stub.GetStringArgs()
    if len(args) != 2 {
            return shim.Error("Incorrect arguments. Expecting a key and a value")
    }

    // Set up any variables or assets here by calling stub.PutState()

    // We store the key and the value on the ledger
    err := stub.PutState(args[0], []byte(args[1]))
    if err != nil {
            return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
    }
    return shim.Success(nil)
}

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // Extract the function and args from the transaction proposal
    fn, args := stub.GetFunctionAndParameters()

    var result string
    var err error
    if fn == "set" {
            result, err = set(stub, args)
    } else { // assume 'get' even if fn is nil
            result, err = get(stub, args)
    }
    if err != nil {
            return shim.Error(err.Error())
    }

    // Return the result as success payload
    return shim.Success([]byte(result))
}

// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 2 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
    }

    err := stub.PutState(args[0], []byte(args[1]))
    if err != nil {
            return "", fmt.Errorf("Failed to set asset: %s", args[0])
    }
    return args[1], nil
}

// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 1 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key")
    }

    value, err := stub.GetState(args[0])
    if err != nil {
            return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
    }
    if value == nil {
            return "", fmt.Errorf("Asset not found: %s", args[0])
    }
    return string(value), nil
}

// main function starts up the chaincode in the container during instantiate
func main() {
    if err := shim.Start(new(SimpleAsset)); err != nil {
            fmt.Printf("Error starting SimpleAsset chaincode: %s", err)
    }
}
Building Chaincode

Now let’s compile your chaincode.

go get -u --tags nopkcs11 github.com/hyperledger/fabric/core/chaincode/shim
go build --tags nopkcs11

Assuming there are no errors, now we can proceed to the next step, testing your chaincode.

Testing Using dev mode

Normally chaincodes are started and maintained by the peer. However, in “dev mode”, chaincode is built and started by the user. This mode is useful during the chaincode development phase for rapid code/build/run/debug cycle turnaround.

We start “dev mode” by leveraging pre-generated orderer and channel artifacts for a sample dev network. As such, the user can immediately jump into the process of compiling chaincode and driving calls.

Install Hyperledger Fabric Samples

If you haven’t already done so, please install the Hyperledger Fabric Samples.

Navigate to the chaincode-docker-devmode directory of the fabric-samples clone:

cd chaincode-docker-devmode

Download Docker images

We need four Docker images in order for “dev mode” to run against the supplied docker compose script. If you installed the fabric-samples repo clone and followed the instructions to download the platform-specific binaries, then you should have the necessary Docker images installed locally.

Note

If you choose to manually pull the images then you must retag them as latest.

Issue a docker images command to reveal your local Docker Registry. You should see something similar to the following:

docker images
REPOSITORY                     TAG                                  IMAGE ID            CREATED             SIZE
hyperledger/fabric-tools       latest                               e09f38f8928d        4 hours ago         1.32 GB
hyperledger/fabric-tools       x86_64-1.0.0                         e09f38f8928d        4 hours ago         1.32 GB
hyperledger/fabric-orderer     latest                               0df93ba35a25        4 hours ago         179 MB
hyperledger/fabric-orderer     x86_64-1.0.0                         0df93ba35a25        4 hours ago         179 MB
hyperledger/fabric-peer        latest                               533aec3f5a01        4 hours ago         182 MB
hyperledger/fabric-peer        x86_64-1.0.0                         533aec3f5a01        4 hours ago         182 MB
hyperledger/fabric-ccenv       latest                               4b70698a71d3        4 hours ago         1.29 GB
hyperledger/fabric-ccenv       x86_64-1.0.0                         4b70698a71d3        4 hours ago         1.29 GB

Note

If you retrieved the images through the platform-specific binary download, then you will see additional images listed. However, we are only concerned with these four.

Now open three terminals and navigate to your chaincode-docker-devmode directory in each.

Terminal 1 - Start the network

docker-compose -f docker-compose-simple.yaml up

The above starts the network with the SingleSampleMSPSolo orderer profile and launches the peer in “dev mode”. It also launches two additional containers - one for the chaincode environment and a CLI to interact with the chaincode. The commands for create and join channel are embedded in the CLI container, so we can jump immediately to the chaincode calls.

Terminal 2 - Build & start the chaincode

docker exec -it chaincode bash

You should see the following:

root@d2629980e76b:/opt/gopath/src/chaincode#

Now, compile your chaincode:

cd sacc
go build

Now run the chaincode:

CORE_PEER_ADDRESS=peer:7052 CORE_CHAINCODE_ID_NAME=mycc:0 ./sacc

The chaincode is started with peer and chaincode logs indicating successful registration with the peer. Note that at this stage the chaincode is not associated with any channel. This is done in subsequent steps using the instantiate command.

Terminal 3 - Use the chaincode

Even though you are in --peer-chaincodedev mode, you still have to install the chaincode so the life-cycle system chaincode can go through its checks normally. This requirement may be removed in a future release for --peer-chaincodedev mode.

We’ll leverage the CLI container to drive these calls.

docker exec -it cli bash
peer chaincode install -p chaincodedev/chaincode/sacc -n mycc -v 0
peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a","10"]}' -C myc

Now issue an invoke to change the value of “a” to “20”.

peer chaincode invoke -n mycc -c '{"Args":["set", "a", "20"]}' -C myc

Finally, query a. We should see a value of 20.

peer chaincode query -n mycc -c '{"Args":["query","a"]}' -C myc

Testing new chaincode

By default, we mount only sacc. However, you can easily test different chaincodes by adding them to the chaincode subdirectory and relaunching your network. At this point they will be accessible in your chaincode container.
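
For example, assuming a chaincode under $GOPATH/src/mynewcc (a hypothetical name), you might copy it in and relaunch:

cp -r $GOPATH/src/mynewcc chaincode/
docker-compose -f docker-compose-simple.yaml down
docker-compose -f docker-compose-simple.yaml up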

Chaincode encryption

In certain scenarios, it may be useful to encrypt values associated with a key in their entirety or simply in part. For example, if a person’s social security number or address was being written to the ledger, then you likely would not want this data to appear in plaintext. Chaincode encryption is achieved by leveraging the entities extension which is a BCCSP wrapper with commodity factories and functions to perform cryptographic operations such as encryption and elliptic curve digital signatures. For example, to encrypt, the invoker of a chaincode passes in a cryptographic key via the transient field. The same key may then be used for subsequent query operations, allowing for proper decryption of the encrypted state values.

For more information and samples, see the Encc Example within the fabric/examples directory. Pay specific attention to the utils.go helper program. This utility loads the chaincode shim APIs and Entities extension and builds a new class of functions (e.g. encryptAndPutState & getStateAndDecrypt) that the sample encryption chaincode then leverages. As such, the chaincode can now marry the basic shim APIs of Get and Put with the added functionality of Encrypt and Decrypt.
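
For example, an invoke that supplies an AES key through the transient field might look like the following sketch; the chaincode name encc, the function name ENC, and the transient key ENCKEY are illustrative rather than the sample’s exact identifiers:

ENCKEY=$(openssl rand -base64 32)
peer chaincode invoke -C my-ch -n encc -c '{"Args":["ENC","key1","value1"]}' --transient "{\"ENCKEY\":\"$ENCKEY\"}"

The same key would then be supplied via --transient on the subsequent query so the chaincode can decrypt the state value.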

Managing external dependencies for chaincode written in Go

If your chaincode requires packages not provided by the Go standard library, you will need to include those packages with your chaincode. There are many tools available for managing (or “vendoring”) these dependencies. The following demonstrates how to use govendor:

govendor init
govendor add +external                  # add all external packages, or
govendor add github.com/external/pkg    # add a specific external package

This imports the external dependencies into a local vendor directory. peer chaincode package and peer chaincode install operations will then include code associated with the dependencies into the chaincode package.

Chaincode for Operators

What is Chaincode?

Chaincode is a program, written in Go, and eventually in other programming languages such as Java, that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages ledger state through transactions submitted by applications.

A chaincode typically handles business logic agreed to by members of the network, so it may be considered as a “smart contract”. State created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly by another chaincode. However, within the same network, given the appropriate permission a chaincode may invoke another chaincode to access its state.

In the following sections, we will explore chaincode through the eyes of a blockchain network operator, Noah. For Noah’s interests, we will focus on chaincode lifecycle operations; the process of packaging, installing, instantiating and upgrading the chaincode as a function of the chaincode’s operational lifecycle within a blockchain network.

Chaincode lifecycle

The Hyperledger Fabric API enables interaction with the various nodes in a blockchain network - the peers, orderers and MSPs - and it also allows one to package, install, instantiate and upgrade chaincode on the endorsing peer nodes. The Hyperledger Fabric language-specific SDKs abstract the specifics of the Hyperledger Fabric API to facilitate application development, though it can be used to manage a chaincode’s lifecycle. Additionally, the Hyperledger Fabric API can be accessed directly via the CLI, which we will use in this document.

We provide four commands to manage a chaincode’s lifecycle: package, install, instantiate, and upgrade. In a future release, we are considering adding stop and start transactions to disable and re-enable a chaincode without having to actually uninstall it. After a chaincode has been successfully installed and instantiated, the chaincode is active (running) and can process transactions via the invoke transaction. A chaincode may be upgraded any time after it has been installed.

Packaging

The chaincode package consists of 3 parts:

  • the chaincode, as defined by ChaincodeDeploymentSpec or CDS. The CDS defines the chaincode package in terms of the code and other properties such as name and version,
  • an optional instantiation policy which can be syntactically described by the same policy used for endorsement and described in Endorsement policies, and
  • a set of signatures by the entities that “own” the chaincode.

The signatures serve the following purposes:

  • to establish an ownership of the chaincode,
  • to allow verification of the contents of the package, and
  • to allow detection of package tampering.

The creator of the instantiation transaction of the chaincode on a channel is validated against the instantiation policy of the chaincode.

Creating the package

There are two approaches to packaging chaincode. One for when you want to have multiple owners of a chaincode, and hence need to have the chaincode package signed by multiple identities. This workflow requires that we initially create a signed chaincode package (a SignedCDS) which is subsequently passed serially to each of the other owners for signing.

The simpler workflow is for when you are deploying a SignedCDS that has only the signature of the identity of the node that is issuing the install transaction.

We will address the more complex case first. However, you may skip ahead to the Installing chaincode section below if you do not need to worry about multiple owners just yet.

To create a signed chaincode package, use the following command:

peer chaincode package -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -v 0 -s -S -i "AND('OrgA.admin')" ccpack.out

The -s option creates a package that can be signed by multiple owners as opposed to simply creating a raw CDS. When -s is specified, the -S option must also be specified if other owners are going to need to sign. Otherwise, the process will create a SignedCDS that includes only the instantiation policy in addition to the CDS.

The -S option directs the process to sign the package using the MSP identified by the value of the localMspid property in core.yaml.

The -S option is optional. However, if a package is created without a signature, it cannot be signed by any other owner using the signpackage command.

The optional -i option allows one to specify an instantiation policy for the chaincode. The instantiation policy has the same format as an endorsement policy and specifies which identities can instantiate the chaincode. In the example above, only the admin of OrgA is allowed to instantiate the chaincode. If no policy is provided, the default policy is used, which only allows the admin identity of the peer’s MSP to instantiate chaincode.

Package signing

A chaincode package that was signed at creation can be handed over to other owners for inspection and signing. The workflow supports out-of-band signing of the chaincode package.

The ChaincodeDeploymentSpec may optionally be signed by the collective owners to create a SignedChaincodeDeploymentSpec (or SignedCDS). The SignedCDS contains 3 elements:

  1. The CDS contains the source code, the name, and version of the chaincode.
  2. An instantiation policy of the chaincode, expressed as endorsement policies.
  3. The list of chaincode owners, defined by means of Endorsement.

Note

Note that this endorsement policy is determined out-of-band to provide proper MSP principals when the chaincode is instantiated on some channels. If the instantiation policy is not specified, the default policy is any MSP administrator of the channel.

Each owner endorses the ChaincodeDeploymentSpec by combining it with that owner’s identity (e.g. certificate) and signing the combined result.

A chaincode owner can sign a previously created signed package using the following command:

peer chaincode signpackage ccpack.out signedccpack.out

Where ccpack.out and signedccpack.out are the input and output packages, respectively. signedccpack.out contains an additional signature over the package signed using the Local MSP.

Installing chaincode

The install transaction packages a chaincode’s source code into a prescribed format called a ChaincodeDeploymentSpec (or CDS) and installs it on a peer node that will run that chaincode.

Note

You must install the chaincode on each endorsing peer node of a channel that will run your chaincode.

When the install API is given simply a ChaincodeDeploymentSpec, it will default the instantiation policy and include an empty owner list.

Note

Chaincode should only be installed on endorsing peer nodes of the owning members of the chaincode to protect the confidentiality of the chaincode logic from other members on the network. Those members without the chaincode can’t be the endorsers of the chaincode’s transactions; that is, they can’t execute the chaincode. However, they can still validate and commit the transactions to the ledger.

To install a chaincode, send a SignedProposal to the lifecycle system chaincode (LSCC) described in the System Chaincode section. For example, to install the sacc sample chaincode described in section Simple Asset Chaincode using the CLI, the command would look like the following:

peer chaincode install -n asset_mgmt -v 1.0 -p sacc

The CLI internally creates the SignedChaincodeDeploymentSpec for sacc and sends it to the local peer, which calls the Install method on the LSCC. The argument to the -p option specifies the path to the chaincode, which must be located within the source tree of the user’s GOPATH, e.g. $GOPATH/src/sacc. See the CLI section for a complete description of the command options.

Note that in order to install on a peer, the signature of the SignedProposal must be from 1 of the peer’s local MSP administrators.

Instantiate

The instantiate transaction invokes the lifecycle System Chaincode (LSCC) to create and initialize a chaincode on a channel. This is a chaincode-channel binding process: a chaincode may be bound to any number of channels and operate on each channel individually and independently. In other words, regardless of how many other channels on which a chaincode might be installed and instantiated, state is kept isolated to the channel to which a transaction is submitted.

The creator of an instantiate transaction must satisfy the instantiation policy of the chaincode included in SignedCDS and must also be a writer on the channel, which is configured as part of the channel creation. This is important for the security of the channel to prevent rogue entities from deploying chaincodes or tricking members to execute chaincodes on an unbound channel.

For example, recall that the default instantiation policy is any channel MSP administrator, so the creator of a chaincode instantiate transaction must be a member of the channel administrators. When the transaction proposal arrives at the endorser, it verifies the creator’s signature against the instantiation policy. This is done again during the transaction validation before committing it to the ledger.

The instantiate transaction also sets up the endorsement policy for that chaincode on the channel. The endorsement policy describes the attestation requirements for the transaction result to be accepted by members of the channel.

For example, using the CLI to instantiate the sacc chaincode and initialize the state with john and 0, the command would look like the following:

peer chaincode instantiate -n sacc -v 1.0 -c '{"Args":["john","0"]}' -P "OR ('Org1.member','Org2.member')"

Note

Note the endorsement policy (CLI uses polish notation), which requires an endorsement from either member of Org1 or Org2 for all transactions to sacc. That is, either Org1 or Org2 must sign the result of executing the Invoke on sacc for the transactions to be valid.

After being successfully instantiated, the chaincode enters the active state on the channel and is ready to process any transaction proposals of type ENDORSER_TRANSACTION. The transactions are processed concurrently as they arrive at the endorsing peer.

Upgrade

A chaincode may be upgraded any time by changing its version, which is part of the SignedCDS. Other parts, such as owners and instantiation policy, are optional. However, the chaincode name must be the same; otherwise it would be considered a totally different chaincode.

Prior to upgrade, the new version of the chaincode must be installed on the required endorsers. Upgrade is a transaction similar to the instantiate transaction, which binds the new version of the chaincode to the channel. Other channels bound to the old version of the chaincode still run with the old version. In other words, the upgrade transaction only affects one channel at a time, the channel to which the transaction is submitted.

Note

Note that since multiple versions of a chaincode may be active simultaneously, the upgrade process doesn’t automatically remove the old versions, so the user must manage this for the time being.

There’s one subtle difference with the instantiate transaction: the upgrade transaction is checked against the current chaincode instantiation policy, not the new policy (if specified). This is to ensure that only existing members specified in the current instantiation policy may upgrade the chaincode.

Note

Note that during upgrade, the chaincode Init function is called to perform any data related updates or re-initialize it, so care must be taken to avoid resetting states when upgrading chaincode.

Stop and Start

Note that stop and start lifecycle transactions have not yet been implemented. However, you may stop a chaincode manually by removing the chaincode container and the SignedCDS package from each of the endorsers. This is done by deleting the chaincode’s container on each of the hosts or virtual machines on which the endorsing peer nodes are running, and then deleting the SignedCDS from each of the endorsing peer nodes:

Note

TODO - in order to delete the CDS from the peer node, you would need to enter the peer node’s container, first. We really need to provide a utility script that can do this.

docker rm -f <container id>
rm /var/hyperledger/production/chaincodes/<ccname>:<ccversion>

Stop would be useful in the workflow for doing an upgrade in a controlled manner, where a chaincode can be stopped on a channel on all peers before issuing an upgrade.

CLI

Note

We are assessing the need to distribute platform-specific binaries for the Hyperledger Fabric peer binary. For the time being, you can simply invoke the commands from within a running docker container.

To view the currently available CLI commands, execute the following command from within a running fabric-peer Docker container:

docker run -it hyperledger/fabric-peer bash
# peer chaincode --help

Which shows output similar to the example below:

Usage:
  peer chaincode [command]

Available Commands:
  install     Package the specified chaincode into a deployment spec and save it on the peer's path.
  instantiate Deploy the specified chaincode to the network.
  invoke      Invoke the specified chaincode.
  list        Get the instantiated chaincodes on a channel or installed chaincodes on a peer.
  package     Package the specified chaincode into a deployment spec.
  query       Query using the specified chaincode.
  signpackage Sign the specified chaincode package
  upgrade     Upgrade chaincode.

Flags:
      --cafile string      Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
  -h, --help               help for chaincode
  -o, --orderer string     Ordering service endpoint
      --tls                Use TLS when communicating with the orderer endpoint
      --transient string   Transient map of arguments in JSON encoding

Global Flags:
      --logging-level string       Default logging level and overrides, see core.yaml for full syntax
      --test.coverprofile string   Done (default "coverage.cov")
  -v, --version

Use "peer chaincode [command] --help" for more information about a command.

To facilitate its use in scripted applications, the peer command always produces a non-zero return code in the event of command failure.

Example of chaincode commands:

peer chaincode install -n mycc -v 0 -p path/to/my/chaincode/v0
peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a", "b", "c"]}' -C mychannel
peer chaincode install -n mycc -v 1 -p path/to/my/chaincode/v1
peer chaincode upgrade -n mycc -v 1 -c '{"Args":["d", "e", "f"]}' -C mychannel
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","e"]}'
peer chaincode invoke -o orderer.example.com:7050  --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'

System chaincode

System chaincode has the same programming model except that it runs within the peer process rather than in an isolated container like normal chaincode. Therefore, system chaincode is built into the peer executable and doesn’t follow the same lifecycle described above. In particular, install, instantiate and upgrade do not apply to system chaincodes.

The purpose of system chaincode is to shortcut gRPC communication costs between the peer and chaincode, trading off flexibility in management. For example, a system chaincode can only be upgraded with the peer binary. It must also register with a fixed set of parameters compiled in, and it doesn’t have endorsement policies or endorsement policy functionality.

System chaincode is used in Hyperledger Fabric to implement a number of system behaviors so that they can be replaced or modified as appropriate by a system integrator.

The current list of system chaincodes:

  1. LSCC Lifecycle system chaincode handles lifecycle requests described above.
  2. CSCC Configuration system chaincode handles channel configuration on the peer side.
  3. QSCC Query system chaincode provides ledger query APIs such as getting blocks and transactions.
  4. ESCC Endorsement system chaincode handles endorsement by signing the transaction proposal response.
  5. VSCC Validation system chaincode handles the transaction validation, including checking endorsement policy and multiversioning concurrency control.

Care must be taken when modifying or replacing these system chaincodes, especially LSCC, ESCC and VSCC since they are in the main transaction execution path. It is worth noting that as VSCC validates a block before committing it to the ledger, it is important that all peers in the channel compute the same validation to avoid ledger divergence (non-determinism). So special care is needed if VSCC is modified or replaced.

System Chaincode Plugins

System chaincodes are specialized chaincodes that run as part of the peer process, as opposed to user chaincodes that run in separate Docker containers. As such, they have more access to resources in the peer and can be used for implementing features that are difficult or impossible to implement through user chaincodes. Examples of system chaincodes are ESCC (Endorser System Chaincode) for endorsing proposals, QSCC (Query System Chaincode) for ledger and other Fabric-related queries, and VSCC (Validation System Chaincode) for validating a transaction at commit time.

Unlike a user chaincode, a system chaincode is not installed and instantiated using proposals from SDKs or CLI. It is registered and deployed by the peer at start-up.

System chaincodes can be linked to a peer in two ways: statically and dynamically using Go plugins. This tutorial will outline how to develop and load system chaincodes as plugins.

Developing Plugins

A system chaincode is a program written in Go and loaded using the Go plugin package.

A plugin includes a main package with exported symbols and is built with the command go build -buildmode=plugin.

Every system chaincode must implement the Chaincode Interface and export a constructor method that matches the signature func New() shim.Chaincode in the main package. An example can be found in the repository at examples/plugin/scc.
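
A minimal sketch of such a plugin (the name mysyscc is illustrative) might look like this; it would be built with go build -buildmode=plugin -o mysyscc.so:

package main

import (
    "github.com/hyperledger/fabric/core/chaincode/shim"
    pb "github.com/hyperledger/fabric/protos/peer"
)

// mySysCC is a minimal system chaincode; a real one would implement actual
// logic in Init and Invoke.
type mySysCC struct{}

func (s *mySysCC) Init(stub shim.ChaincodeStubInterface) pb.Response {
    return shim.Success(nil)
}

func (s *mySysCC) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
    return shim.Success(nil)
}

// New is the exported constructor the peer looks up when loading the plugin.
func New() shim.Chaincode {
    return &mySysCC{}
}

// main is required to build the plugin but is never executed.
func main() {}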

Existing chaincodes such as the QSCC can also serve as templates for certain features - such as access control - that are typically implemented through system chaincodes. The existing system chaincodes also serve as a reference for best-practices on things like logging and testing.

Note

On imported packages: the Go standard library requires that a plugin must include the same version of imported packages as the host application (fabric, in this case).

Configuring Plugins

Plugins are configured in the chaincode.systemPlugins section in core.yaml:

chaincode:
  systemPlugins:
    - enabled: true
      name: mysyscc
      path: /opt/lib/syscc.so
      invokableExternal: true
      invokableCC2CC: true

A system chaincode must also be whitelisted in the chaincode.system section in core.yaml:

chaincode:
  system:
    mysyscc: enable

Videos

Refer to the Hyperledger Fabric channel on YouTube



This collection contains developers demonstrating various v1 features and components such as: ledger, channels, gossip, SDK, chaincode, MSP, and more...

Operations Guides

Upgrading from v1.0.x

At a high level, upgrading a Fabric network to v1.1 can be performed by following these steps:

  • Upgrade binaries for orderers, peers, and fabric-ca. These upgrades may be done in parallel.
  • Upgrade client SDKs.
  • Enable v1.1 channel capability requirements.
  • (Optional) Upgrade the Kafka cluster.

To help understand this process, we’ve created the Upgrading Your Network Components tutorial that will take you through most of the major upgrade steps, including upgrading peers, orderers, as well as enabling capability requirements.

Because our tutorial leverages the Building Your First Network (BYFN) sample, it has certain limitations (it does not use Fabric CA, for example). Therefore we have included a section at the end of the tutorial that will show how to upgrade your CA, Kafka clusters, CouchDB, Zookeeper, vendored chaincode shims, and Node SDK clients.

If you want to learn more about capability requirements, click here.

Updating a Channel Configuration

What is a Channel Configuration?

Channel configurations contain all of the information relevant to the administration of a channel. Most importantly, the channel configuration specifies which organizations are members of the channel, but it also includes other channel-wide configuration information such as channel access policies and block batch sizes.

This configuration is stored on the ledger in a block, and is therefore known as a configuration (config) block. Configuration blocks contain a single configuration. The first of these blocks is known as the “genesis block” and contains the initial configuration required to bootstrap a channel. Each time the configuration of a channel changes it is done through a new configuration block, with the latest configuration block representing the current channel configuration. Orderers and peers keep the current channel configuration in memory to facilitate all channel operations such as cutting a new block and validating block transactions.

Because configurations are stored in blocks, updating a config happens through a process called a “configuration transaction” (even though the process is a little different from a normal transaction). Updating a config is a process of pulling the config, translating it into a format that humans can read, modifying it, and then submitting it for approval.

For a more in-depth look at the process for pulling a config and translating it into JSON, check out Adding an Org to a Channel. In this doc, we’ll be focusing on the different ways you can edit a config and the process for getting it signed.

Editing a Config

Channels are highly configurable, but not infinitely so. Different configuration elements have different modification policies (which specify the group of identities required to sign the config update).

To see the scope of what’s possible to change it’s important to look at a config in JSON format. The Adding an Org to a Channel tutorial generates one, so if you’ve gone through that doc you can simply refer to it. For those who have not, we’ll provide one here (for ease of readability, it might be helpful to put this config into a viewer that supports JSON folding, like atom or Visual Studio).

Here is the config:

{
"channel_group": {
  "groups": {
    "Application": {
      "groups": {
        "Org1MSP": {
          "mod_policy": "Admins",
          "policies": {
            "Admins": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org1MSP",
                        "role": "ADMIN"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Readers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org1MSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Writers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org1MSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            }
          },
          "values": {
            "AnchorPeers": {
              "mod_policy": "Admins",
              "value": {
                "anchor_peers": [
                  {
                    "host": "peer0.org1.example.com",
                    "port": 7051
                  }
                ]
              },
              "version": "0"
            },
            "MSP": {
              "mod_policy": "Admins",
              "value": {
                "config": {
                  "admins": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHRENDQWIrZ0F3SUJBZ0lRSWlyVmg3NVcwWmh0UjEzdmltdmliakFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NUzVsZUdGdGNHeGxMbU52YlRBZUZ3MHhOekV4TWpreE9USTBNRFphRncweU56RXhNamN4T1RJME1EWmEKTUZzeEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVI4d0hRWURWUVFEREJaQlpHMXBia0J2Y21jeExtVjRZVzF3YkdVdVkyOXRNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNkdVeDlpczZ0aG1ZRE9tMmVHSlA5eW1yaXJYWE1Cd0oKQmVWb1Vpak5haUdsWE03N2NsSE5aZjArMGFjK2djRU5lMzQweGExZVFnb2Q0YjVFcmQrNmtxTk5NRXN3RGdZRApWUjBQQVFIL0JBUURBZ2VBTUF3R0ExVWRFd0VCL3dRQ01BQXdLd1lEVlIwakJDUXdJb0FnWWdoR2xCMjBGWmZCCllQemdYT280czdkU1k1V3NKSkRZbGszTDJvOXZzQ013Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnYmlEWDVTMlIKRTBNWGRobDZFbmpVNm1lTEJ0eXNMR2ZpZXZWTlNmWW1UQVVDSUdVbnROangrVXZEYkZPRHZFcFRVTm5MUHp0Qwp5ZlBnOEhMdWpMaXVpaWFaCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                  ],
                  "crypto_config": {
                    "identity_identifier_hash_function": "SHA256",
                    "signature_hash_family": "SHA2"
                  },
                  "name": "Org1MSP",
                  "root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNRekNDQWVxZ0F3SUJBZ0lSQU03ZVdTaVM4V3VVM2haMU9tR255eXd3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekV1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGN4TVRJNU1Ua3lOREEyV2hjTk1qY3hNVEkzTVRreU5EQTIKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NUzVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1TNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQkJiTTVZS3B6UmlEbDdLWWFpSDVsVnBIeEl1TDEyaUcyWGhkMHRpbEg3MEljMGFpRUh1dG9rTkZsUXAzTWI0Zgpvb0M2bFVXWnRnRDJwMzZFNThMYkdqK2pYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSUdJSVJwUWR0QldYd1dEODRGenEKT0xPM1VtT1ZyQ1NRMkpaTnk5cVBiN0FqTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUdlS2VZL1BsdGlWQTRPSgpRTWdwcDRvaGRMcGxKUFpzNERYS0NuOE9BZG9YQWlCK2g5TFdsR3ZsSDdtNkVpMXVRcDFld2ZESmxsZi9MZXczClgxaDNRY0VMZ3c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                  ],
                  "tls_root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWZDZ0F3SUJBZ0lSQUtsNEFQWmV6dWt0Nk8wYjRyYjY5Y0F3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpFdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRjeE1USTVNVGt5TkRBMldoY05NamN4TVRJM01Ua3kKTkRBMldqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTVM1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCSnNpQXVjYlcrM0lqQ2VaaXZPakRiUmFyVlRjTW9TRS9mSnQyU0thR1d5bWQ0am5xM25MWC9vVApCVmpZb21wUG1QbGZ4R0VSWHl0UTNvOVZBL2hwNHBlalh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJSnlqZnFoa0FvY3oKdkRpNnNGSGFZL1Bvd2tPWkxPMHZ0VGdFRnVDbUpFalZNQW9HQ0NxR1NNNDlCQU1DQTBjQU1FUUNJRjVOVVdCVgpmSjgrM0lxU3J1NlFFbjlIa0lsQ0xDMnlvWTlaNHBWMnpBeFNBaUE5NWQzeDhBRXZIcUFNZnIxcXBOWHZ1TW5BCmQzUXBFa1gyWkh3ODZlQlVQZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                  ]
                },
                "type": 0
              },
              "version": "0"
            }
          },
          "version": "1"
        },
        "Org2MSP": {
          "mod_policy": "Admins",
          "policies": {
            "Admins": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org2MSP",
                        "role": "ADMIN"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Readers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org2MSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Writers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org2MSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            }
          },
          "values": {
            "AnchorPeers": {
              "mod_policy": "Admins",
              "value": {
                "anchor_peers": [
                  {
                    "host": "peer0.org2.example.com",
                    "port": 7051
                  }
                ]
              },
              "version": "0"
            },
            "MSP": {
              "mod_policy": "Admins",
              "value": {
                "config": {
                  "admins": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHVENDQWNDZ0F3SUJBZ0lSQU5Pb1lIbk9seU94dTJxZFBteStyV293Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGN4TVRJNU1Ua3lOREEyV2hjTk1qY3hNVEkzTVRreU5EQTIKV2pCYk1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFZk1CMEdBMVVFQXd3V1FXUnRhVzVBYjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaCk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQkh1M0ZWMGlqdFFzckpsbnBCblgyRy9ickFjTHFJSzgKVDFiSWFyZlpvSkhtQm5IVW11RTBhc1dyKzM4VUs0N3hyczNZMGMycGhFVjIvRnhHbHhXMUZubWpUVEJMTUE0RwpBMVVkRHdFQi93UUVBd0lIZ0RBTUJnTlZIUk1CQWY4RUFqQUFNQ3NHQTFVZEl3UWtNQ0tBSU1pSzdteFpnQVVmCmdrN0RPTklXd2F4YktHVGdLSnVSNjZqVmordHZEV3RUTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUQxaEtRdk8KVWxyWmVZMmZZY1N2YWExQmJPM3BVb3NxL2tZVElyaVdVM1J3QWlBR29mWmVPUFByWXVlTlk0Z2JCV2tjc3lpZgpNMkJmeXQwWG9NUThyT2VidUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                  ],
                  "crypto_config": {
                    "identity_identifier_hash_function": "SHA256",
                    "signature_hash_family": "SHA2"
                  },
                  "name": "Org2MSP",
                  "root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQU1pVXk5SGRSbXB5MDdsSjhRMlZNWXN3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGN4TVRJNU1Ua3lOREEyV2hjTk1qY3hNVEkzTVRreU5EQTIKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NaTVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQk50YW1PY1hyaGwrQ2hzYXNSeklNWjV3OHpPWVhGcXhQbGV0a3d5UHJrbHpKWE01Qjl4QkRRVWlWNldJS2tGSwo0Vmd5RlNVWGZqaGdtd25kMUNBVkJXaWpYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSU1pSzdteFpnQVVmZ2s3RE9OSVcKd2F4YktHVGdLSnVSNjZqVmordHZEV3RUTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFEQ3FFRmFqeU5IQmVaRworOUdWVkNFNWI1YTF5ZlhvS3lkemdLMVgyOTl4ZmdJZ05BSUUvM3JINHFsUE9HbjdSS3Yram9WaUNHS2t6L0F1Cm9FNzI4RWR6WmdRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                  ],
                  "tls_root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTakNDQWZDZ0F3SUJBZ0lSQU9JNmRWUWMraHBZdkdMSlFQM1YwQU13Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpJdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRjeE1USTVNVGt5TkRBMldoY05NamN4TVRJM01Ua3kKTkRBMldqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTWk1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTWk1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCTWZ1QTMwQVVBT1ZKRG1qVlBZd1lNbTlweW92MFN6OHY4SUQ5N0twSHhXOHVOOUdSOU84aVdFMgo5bllWWVpiZFB2V1h1RCszblpweUFNcGZja3YvYUV5alh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJRnk5VHBHcStQL08KUGRXbkZXdWRPTnFqVDRxOEVKcDJmbERnVCtFV2RnRnFNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJUUNZYlhSeApXWDZoUitPU0xBNSs4bFRwcXRMWnNhOHVuS3J3ek1UYXlQUXNVd0lnVSs5YXdaaE0xRzg3bGE0V0h4cmt5eVZ2CkU4S1ZsR09IVHVPWm9TMU5PT0U9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                  ]
                },
                "type": 0
              },
              "version": "0"
            }
          },
          "version": "1"
        },
        "Org3MSP": {
          "groups": {},
          "mod_policy": "Admins",
          "policies": {
            "Admins": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org3MSP",
                        "role": "ADMIN"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Readers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org3MSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Writers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "Org3MSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            }
          },
          "values": {
            "MSP": {
              "mod_policy": "Admins",
              "value": {
                "config": {
                  "admins": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHRENDQWIrZ0F3SUJBZ0lRQUlSNWN4U0hpVm1kSm9uY3FJVUxXekFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTXk1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NeTVsZUdGdGNHeGxMbU52YlRBZUZ3MHhOekV4TWpreE9UTTRNekJhRncweU56RXhNamN4T1RNNE16QmEKTUZzeEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVI4d0hRWURWUVFEREJaQlpHMXBia0J2Y21jekxtVjRZVzF3YkdVdVkyOXRNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFSFlkVFY2ZC80cmR4WFd2cm1qZ0hIQlhXc2lxUWxrcnQKZ0p1NzMxcG0yZDRrWU82aEd2b2tFRFBwbkZFdFBwdkw3K1F1UjhYdkFQM0tqTkt0NHdMRG5hTk5NRXN3RGdZRApWUjBQQVFIL0JBUURBZ2VBTUF3R0ExVWRFd0VCL3dRQ01BQXdLd1lEVlIwakJDUXdJb0FnSWNxUFVhM1VQNmN0Ck9LZmYvKzVpMWJZVUZFeVFlMVAyU0hBRldWSWUxYzB3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnUm5LRnhsTlYKSmppVGpkZmVoczRwNy9qMkt3bFVuUWVuNFkyUnV6QjFrbm9DSUd3dEZ1TEdpRFY2THZSL2pHVXR3UkNyeGw5ZApVNENCeDhGbjBMdXNMTkJYCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                  ],
                  "crypto_config": {
                    "identity_identifier_hash_function": "SHA256",
                    "signature_hash_family": "SHA2"
                  },
                  "name": "Org3MSP",
                  "root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNRakNDQWVtZ0F3SUJBZ0lRUkN1U2Y0RVJNaDdHQW1ydTFIQ2FZREFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTXk1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NeTVsZUdGdGNHeGxMbU52YlRBZUZ3MHhOekV4TWpreE9UTTRNekJhRncweU56RXhNamN4T1RNNE16QmEKTUhNeEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVJrd0Z3WURWUVFLRXhCdmNtY3pMbVY0WVcxd2JHVXVZMjl0TVJ3d0dnWURWUVFECkV4TmpZUzV2Y21jekxtVjRZVzF3YkdVdVkyOXRNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUUKZXFxOFFQMnllM08vM1J3UzI0SWdtRVdST3RnK3Zyc2pRY1BvTU42NEZiUGJKbmExMklNaVdDUTF6ZEZiTU9hSAorMUlrb21yY0RDL1ZpejkvY0M0NW9xTmZNRjB3RGdZRFZSMFBBUUgvQkFRREFnR21NQThHQTFVZEpRUUlNQVlHCkJGVWRKUUF3RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFwQmdOVkhRNEVJZ1FnSWNxUFVhM1VQNmN0T0tmZi8rNWkKMWJZVUZFeVFlMVAyU0hBRldWSWUxYzB3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnTEgxL2xSZElWTVA4Z2FWeQpKRW01QWQ0SjhwZ256N1BVV2JIMzZvdVg4K1lDSUNPK20vUG9DbDRIbTlFbXhFN3ZnUHlOY2trVWd0SlRiTFhqCk5SWjBxNTdWCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                  ],
                  "tls_root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWZDZ0F3SUJBZ0lSQU9xc2JQQzFOVHJzclEvUUNpalh6K0F3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpNdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpNdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRjeE1USTVNVGt6T0RNd1doY05NamN4TVRJM01Ua3oKT0RNd1dqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTXk1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTXk1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCSVJTTHdDejdyWENiY0VLMmhxSnhBVm9DaDhkejNqcnA5RHMyYW9TQjBVNTZkSUZhVmZoR2FsKwovdGp6YXlndXpFalFhNlJ1MmhQVnRGM2NvQnJ2Ulpxalh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJQ2FkVERGa0JPTGkKblcrN2xCbDExL3pPbXk4a1BlYXc0MVNZWEF6cVhnZEVNQW9HQ0NxR1NNNDlCQU1DQTBjQU1FUUNJQlgyMWR3UwpGaG5NdDhHWXUweEgrUGd5aXQreFdQUjBuTE1Jc1p2dVlRaktBaUFLUlE5N2VrLzRDTzZPWUtSakR0VFM4UFRmCm9nTmJ6dTBxcThjbVhseW5jZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                  ]
                },
                "type": 0
              },
              "version": "0"
            }
          },
          "version": "0"
        }
      },
      "mod_policy": "Admins",
      "policies": {
        "Admins": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "MAJORITY",
              "sub_policy": "Admins"
            }
          },
          "version": "0"
        },
        "Readers": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "ANY",
              "sub_policy": "Readers"
            }
          },
          "version": "0"
        },
        "Writers": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "ANY",
              "sub_policy": "Writers"
            }
          },
          "version": "0"
        }
      },
      "version": "1"
    },
    "Orderer": {
      "groups": {
        "OrdererOrg": {
          "mod_policy": "Admins",
          "policies": {
            "Admins": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "OrdererMSP",
                        "role": "ADMIN"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Readers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "OrdererMSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            },
            "Writers": {
              "mod_policy": "Admins",
              "policy": {
                "type": 1,
                "value": {
                  "identities": [
                    {
                      "principal": {
                        "msp_identifier": "OrdererMSP",
                        "role": "MEMBER"
                      },
                      "principal_classification": "ROLE"
                    }
                  ],
                  "rule": {
                    "n_out_of": {
                      "n": 1,
                      "rules": [
                        {
                          "signed_by": 0
                        }
                      ]
                    }
                  },
                  "version": 0
                }
              },
              "version": "0"
            }
          },
          "values": {
            "MSP": {
              "mod_policy": "Admins",
              "value": {
                "config": {
                  "admins": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNDakNDQWJDZ0F3SUJBZ0lRSFNTTnIyMWRLTTB6THZ0dEdoQnpMVEFLQmdncWhrak9QUVFEQWpCcE1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4RnpBVkJnTlZCQU1URG1OaExtVjRZVzF3CmJHVXVZMjl0TUI0WERURTNNVEV5T1RFNU1qUXdObG9YRFRJM01URXlOekU1TWpRd05sb3dWakVMTUFrR0ExVUUKQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaGJpQkdjbUZ1WTJsegpZMjh4R2pBWUJnTlZCQU1NRVVGa2JXbHVRR1Y0WVcxd2JHVXVZMjl0TUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJCnpqMERBUWNEUWdBRTZCTVcvY0RGUkUvakFSenV5N1BjeFQ5a3pnZitudXdwKzhzK2xia0hZd0ZpaForMWRhR3gKKzhpS1hDY0YrZ0tpcVBEQXBpZ2REOXNSeTBoTEMwQnRacU5OTUVzd0RnWURWUjBQQVFIL0JBUURBZ2VBTUF3RwpBMVVkRXdFQi93UUNNQUF3S3dZRFZSMGpCQ1F3SW9BZ3o3bDQ2ZXRrODU0NFJEanZENVB6YjV3TzI5N0lIMnNUCngwTjAzOHZibkpzd0NnWUlLb1pJemowRUF3SURTQUF3UlFJaEFNRTJPWXljSnVyYzhVY2hkeTA5RU50RTNFUDIKcVoxSnFTOWVCK0gxSG5FSkFpQUtXa2h5TmI0akRPS2MramJIVmgwV0YrZ3J4UlJYT1hGaEl4ei85elI3UUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                  ],
                  "crypto_config": {
                    "identity_identifier_hash_function": "SHA256",
                    "signature_hash_family": "SHA2"
                  },
                  "name": "OrdererMSP",
                  "root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNMakNDQWRXZ0F3SUJBZ0lRY2cxUVZkVmU2Skd6YVU1cmxjcW4vakFLQmdncWhrak9QUVFEQWpCcE1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4RnpBVkJnTlZCQU1URG1OaExtVjRZVzF3CmJHVXVZMjl0TUI0WERURTNNVEV5T1RFNU1qUXdObG9YRFRJM01URXlOekU1TWpRd05sb3dhVEVMTUFrR0ExVUUKQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaGJpQkdjbUZ1WTJsegpZMjh4RkRBU0JnTlZCQW9UQzJWNFlXMXdiR1V1WTI5dE1SY3dGUVlEVlFRREV3NWpZUzVsZUdGdGNHeGxMbU52CmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJQTVI2MGdCcVJham9hS0U1TExRYjRIb28wN3QKYTRuM21Ncy9NRGloQVQ5YUN4UGZBcDM5SS8wMmwvZ2xiMTdCcEtxZGpGd0JKZHNuMVN6ZnQ3NlZkTitqWHpCZApNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WSFNVRUNEQUdCZ1JWSFNVQU1BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdLUVlEVlIwT0JDSUVJTSs1ZU9uclpQT2VPRVE0N3crVDgyK2NEdHZleUI5ckU4ZERkTi9MMjV5Yk1Bb0cKQ0NxR1NNNDlCQU1DQTBjQU1FUUNJQVB6SGNOUmQ2a3QxSEdpWEFDclFTM0grL3R5NmcvVFpJa1pTeXIybmdLNQpBaUJnb1BVTTEwTHNsMVFtb2dlbFBjblZGZjJoODBXR2I3NGRIS2tzVFJKUkx3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
                  ],
                  "tls_root_certs": [
                    "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNORENDQWR1Z0F3SUJBZ0lRYWJ5SUl6cldtUFNzSjJacisvRVpXVEFLQmdncWhrak9QUVFEQWpCc01Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4R2pBWUJnTlZCQU1URVhSc2MyTmhMbVY0CllXMXdiR1V1WTI5dE1CNFhEVEUzTVRFeU9URTVNalF3TmxvWERUSTNNVEV5TnpFNU1qUXdObG93YkRFTE1Ba0cKQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdUQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjVERWTmhiaUJHY21GdQpZMmx6WTI4eEZEQVNCZ05WQkFvVEMyVjRZVzF3YkdVdVkyOXRNUm93R0FZRFZRUURFeEYwYkhOallTNWxlR0Z0CmNHeGxMbU52YlRCWk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQkVZVE9mdG1rTHdiSlRNeG1aVzMKZVdqRUQ2eW1UeEhYeWFQdTM2Y1NQWDlldDZyU3Y5UFpCTGxyK3hZN1dtYlhyOHM5K3E1RDMwWHl6OEh1OWthMQpSc1dqWHpCZE1BNEdBMVVkRHdFQi93UUVBd0lCcGpBUEJnTlZIU1VFQ0RBR0JnUlZIU1VBTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0tRWURWUjBPQkNJRUlJcjduNTVjTWlUdENEYmM5UGU0RFpnZ0ZYdHV2RktTdnBNYUhzbzAKSnpFd01Bb0dDQ3FHU000OUJBTUNBMGNBTUVRQ0lGM1gvMGtQRkFVQzV2N25JVVh6SmI5Z3JscWxET05UeVg2QQpvcmtFVTdWb0FpQkpMbS9IUFZ0aVRHY2NldUZPZTE4SnNwd0JTZ1hxNnY1K1BobEdsbU9pWHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                  ]
                },
                "type": 0
              },
              "version": "0"
            }
          },
          "version": "0"
        }
      },
      "mod_policy": "Admins",
      "policies": {
        "Admins": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "MAJORITY",
              "sub_policy": "Admins"
            }
          },
          "version": "0"
        },
        "BlockValidation": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "ANY",
              "sub_policy": "Writers"
            }
          },
          "version": "0"
        },
        "Readers": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "ANY",
              "sub_policy": "Readers"
            }
          },
          "version": "0"
        },
        "Writers": {
          "mod_policy": "Admins",
          "policy": {
            "type": 3,
            "value": {
              "rule": "ANY",
              "sub_policy": "Writers"
            }
          },
          "version": "0"
        }
      },
      "values": {
        "BatchSize": {
          "mod_policy": "Admins",
          "value": {
            "absolute_max_bytes": 103809024,
            "max_message_count": 10,
            "preferred_max_bytes": 524288
          },
          "version": "0"
        },
        "BatchTimeout": {
          "mod_policy": "Admins",
          "value": {
            "timeout": "2s"
          },
          "version": "0"
        },
        "ChannelRestrictions": {
          "mod_policy": "Admins",
          "version": "0"
        },
        "ConsensusType": {
          "mod_policy": "Admins",
          "value": {
            "type": "solo"
          },
          "version": "0"
        }
      },
      "version": "0"
    }
  },
  "mod_policy": "",
  "policies": {
    "Admins": {
      "mod_policy": "Admins",
      "policy": {
        "type": 3,
        "value": {
          "rule": "MAJORITY",
          "sub_policy": "Admins"
        }
      },
      "version": "0"
    },
    "Readers": {
      "mod_policy": "Admins",
      "policy": {
        "type": 3,
        "value": {
          "rule": "ANY",
          "sub_policy": "Readers"
        }
      },
      "version": "0"
    },
    "Writers": {
      "mod_policy": "Admins",
      "policy": {
        "type": 3,
        "value": {
          "rule": "ANY",
          "sub_policy": "Writers"
        }
      },
      "version": "0"
    }
  },
  "values": {
    "BlockDataHashingStructure": {
      "mod_policy": "Admins",
      "value": {
        "width": 4294967295
      },
      "version": "0"
    },
    "Consortium": {
      "mod_policy": "Admins",
      "value": {
        "name": "SampleConsortium"
      },
      "version": "0"
    },
    "HashingAlgorithm": {
      "mod_policy": "Admins",
      "value": {
        "name": "SHA256"
      },
      "version": "0"
    },
    "OrdererAddresses": {
      "mod_policy": "/Channel/Orderer/Admins",
      "value": {
        "addresses": [
          "orderer.example.com:7050"
        ]
      },
      "version": "0"
    }
  },
  "version": "0"
},
"sequence": "3",
"type": 0
}

A config might look intimidating in this form, but once you study it you’ll see that it has a logical structure.

Beyond the policy definitions (which determine who can do certain things at the channel level, and who has permission to change the config), channels also have other kinds of features that can be modified using a config update. The tutorial Adding an Org to a Channel takes you through one of the most important of these: adding an org to a channel. Some other things that are possible to change with a config update include:

  • Batch Size. These parameters dictate the number and size of transactions in a block. No block will be larger than absolute_max_bytes, or contain more than max_message_count transactions. If it is possible to construct a block under preferred_max_bytes, then a block will be cut prematurely; a transaction larger than this size will appear in its own block.

    {
      "absolute_max_bytes": 102760448,
      "max_message_count": 10,
      "preferred_max_bytes": 524288
    }
    
  • Batch Timeout. The amount of time to wait after the first transaction arrives for additional transactions before cutting a block. Decreasing this value will improve latency, but decreasing it too much may decrease throughput by not allowing the block to fill to its maximum capacity.

    { "timeout": "2s" }
    
  • Channel Restrictions. The total number of channels the orderer is willing to allocate may be specified as max_count. This is primarily useful in pre-production environments with weak consortium ChannelCreation policies.

    {
      "max_count": 1000
    }
    
  • Channel Creation Policy. Defines the policy value which will be set as the mod_policy for the Application group of new channels for the consortium it is defined in. The signature set attached to the channel creation request will be checked against the instantiation of this policy in the new channel to ensure that the channel creation is authorized. Note that this config value is only set in the orderer system channel.

    {
      "type": 3,
      "value": {
        "rule": "ANY",
        "sub_policy": "Admins"
      }
    }
    
  • Kafka brokers. When ConsensusType is set to kafka, the brokers list enumerates some subset (or preferably all) of the Kafka brokers for the orderer to initially connect to at startup. Note that it is not possible to change your consensus type after it has been established (during the bootstrapping of the genesis block).

    {
      "brokers": [
        "kafka0:9092",
        "kafka1:9092",
        "kafka2:9092",
        "kafka3:9092"
      ]
    }
    
  • Anchor Peers Definition. Defines the location of the anchor peers for each Org.

    {
      "host": "peer0.org2.example.com",
      "port": 7051
    }
    
  • Hashing Structure. The block data is an array of byte arrays. The hash of the block data is computed as a Merkle tree. This value specifies the width of that Merkle tree. For the time being, this value is fixed to 4294967295 which corresponds to a simple flat hash of the concatenation of the block data bytes.

    { "width": 4294967295 }
    
  • Hashing Algorithm. The algorithm used for computing the hash values encoded into the blocks of the blockchain. In particular, this affects the data hash, and the previous block hash fields of the block. Note, this field currently only has one valid value (SHA256) and should not be changed.

    { "name": "SHA256" }
    
  • Block Validation. This policy specifies the signature requirements for a block to be considered valid. By default, it requires a signature from some member of the ordering org.

    {
      "type": 3,
      "value": {
        "rule": "ANY",
        "sub_policy": "Writers"
      }
    }
    
  • Orderer Address. A list of addresses where clients may invoke the orderer Broadcast and Deliver functions. The peer randomly chooses among these addresses and fails over between them for retrieving blocks.

    {
      "addresses": [
        "orderer.example.com:7050"
      ]
    }
    

Just as you add an org by adding its artifacts and MSP information, you can remove one by reversing the process.

There is another important channel configuration (especially for v1.1) known as Capability Requirements. It has its own doc that can be found here.

Let’s say you want to edit the block batch size for the channel (because this is a single numeric field, it’s one of the easiest changes to make). First, to make referencing the JSON path easy, we define it as an environment variable.

To establish this, take a look at your config, find what you’re looking for, and backtrack the path.

If you find batch size, for example, you’ll see that it’s a value of the Orderer. Orderer can be found under groups, which is under channel_group. The batch size value has a max_message_count parameter under value.

Which would make the path this:

 export MAXBATCHSIZEPATH=".channel_group.groups.Orderer.values.BatchSize.value.max_message_count"

Next, display the value of that property:

jq "$MAXBATCHSIZEPATH" config.json

Which should return a value of 10 (in our sample network at least).

Now, let’s set the new batch size and display the new value:

 jq "$MAXBATCHSIZEPATH = 20" config.json > modified_config.json
 jq "$MAXBATCHSIZEPATH" modified_config.json

Once you’ve modified the JSON, it’s ready to be converted and submitted. The scripts and steps in Adding an Org to a Channel will take you through the process for converting the JSON, so let’s look at the process of submitting it.
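
For reference, the conversion step itself is a configtxlator round trip. A minimal sketch, assuming a channel named mychannel and the config.json / modified_config.json files produced above:

  # Encode both the original and the modified config to protobuf
  configtxlator proto_encode --input config.json --type common.Config --output config.pb
  configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

  # Compute the delta between the two configs
  configtxlator compute_update --channel_id mychannel --original config.pb --updated modified_config.pb --output config_update.pb

  # Decode the update, wrap it in an envelope, and re-encode it for submission
  configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate | jq . > config_update.json
  echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json
  configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb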

Get the Necessary Signatures

Once you’ve successfully generated the protobuf file, it’s time to get it signed. To do this, you need to know the relevant policy for whatever it is you’re trying to change.

By default, editing the configuration of:

  • A particular org (for example, changing anchor peers) requires only the admin signature of that org.
  • The application (like who the member orgs are) requires a majority of the application organizations’ admins to sign.
  • The orderer requires a majority of the ordering organizations’ admins (of which there is by default only one).
  • The top level channel group requires both the agreement of a majority of application organization admins and orderer organization admins.

If you have made changes to the default policies in the channel, you’ll need to compute your signature requirements accordingly.

Note: you may be able to script the signature collection, depending on your application. In general, you may always collect more signatures than are required.

The actual process of getting these signatures will depend on how you’ve set up your system, but there are two main implementations. Currently, the Fabric command line defaults to a “pass it along” system. That is, the Admin of the Org proposing a config update sends the update to someone else (another Admin, typically) who needs to sign it. This Admin signs it (or doesn’t) and passes it along to the next Admin, and so on, until there are enough signatures for the config to be submitted.

This has the virtue of simplicity – when there are enough signatures, the last Admin can simply submit the config transaction (in Fabric, the peer channel update command includes a signature by default). However, this process will only be practical in smaller channels, since the “pass it along” method can be time consuming.

The other option is to submit the update to every Admin on a channel and wait for enough signatures to come back. These signatures can then be stitched together and submitted. This makes life a bit more difficult for the Admin who created the config update (forcing them to deal with a file per signer) but is the recommended workflow for users who are developing Fabric management applications.
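
As a sketch of these steps with the Fabric CLI (assuming the config_update_in_envelope.pb file from the conversion step, a channel named mychannel, and an orderer at orderer.example.com:7050; TLS flags omitted):

  # "Pass it along": each required admin in turn signs the same file
  peer channel signconfigtx -f config_update_in_envelope.pb

  # The final admin submits it; peer channel update attaches that admin's
  # signature automatically before sending it to the orderer
  peer channel update -f config_update_in_envelope.pb -c mychannel -o orderer.example.com:7050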

Once the config has been added to the ledger, it is a best practice to pull it and convert it to JSON to verify that everything was added correctly. This also serves as a useful copy of the latest config.
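
A minimal sketch of that check, under the same assumptions as above (channel mychannel, orderer at orderer.example.com:7050, TLS flags omitted):

  # Fetch the latest config block for the channel
  peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel

  # Decode it and extract just the config for inspection
  configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > latest_config.json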

Membership Service Providers (MSP)

This document provides details on the setup of MSPs and best practices concerning their use.

Membership Service Provider (MSP) is a component that aims to offer an abstraction of a membership operation architecture.

In particular, MSP abstracts away all cryptographic mechanisms and protocols behind issuing and validating certificates, and user authentication. An MSP may define its own notion of identity, and the rules by which those identities are governed (identity validation) and authenticated (signature generation and verification).

A Hyperledger Fabric blockchain network can be governed by one or more MSPs. This provides modularity of membership operations, and interoperability across different membership standards and architectures.

In the rest of this document we elaborate on the setup of the MSP implementation supported by Hyperledger Fabric, and discuss best practices concerning its use.

MSP Configuration

To set up an instance of the MSP, its configuration needs to be specified locally at each peer and orderer (to enable peer and orderer signing), and on the channels to enable peer, orderer, and client identity validation, and respective signature verification (authentication) by and for all channel members.

Firstly, for each MSP a name needs to be specified in order to reference that MSP in the network (e.g. msp1, org2, and org3.divA). This is the name under which the membership rules of an MSP representing a consortium, organization or organization division are to be referenced in a channel. This is also referred to as the MSP Identifier or MSP ID. MSP Identifiers are required to be unique per MSP instance. For example, should two MSP instances with the same identifier be detected at the system channel genesis, orderer setup will fail.

In the case of the default MSP implementation, a set of parameters needs to be specified to allow for identity (certificate) validation and signature verification. These parameters derive from RFC 5280, and include:

  • A list of self-signed (X.509) certificates to constitute the root of trust
  • A list of X.509 certificates to represent intermediate CAs this provider considers for certificate validation; these certificates ought to be certified by exactly one of the certificates in the root of trust; intermediate CAs are optional parameters
  • A list of X.509 certificates with a verifiable certificate path to exactly one of the certificates of the root of trust to represent the administrators of this MSP; owners of these certificates are authorized to request changes to this MSP configuration (e.g. root CAs, intermediate CAs)
  • A list of Organizational Units that valid members of this MSP should include in their X.509 certificate; this is an optional configuration parameter, used when, e.g., multiple organisations leverage the same root of trust, and intermediate CAs, and have reserved an OU field for their members
  • A list of certificate revocation lists (CRLs) each corresponding to exactly one of the listed (intermediate or root) MSP Certificate Authorities; this is an optional parameter
  • A list of self-signed (X.509) certificates to constitute the TLS root of trust for TLS certificate.
  • A list of X.509 certificates to represent intermediate TLS CAs this provider considers; these certificates ought to be certified by exactly one of the certificates in the TLS root of trust; intermediate CAs are optional parameters.

Valid identities for this MSP instance are required to satisfy the following conditions:

  • They are in the form of X.509 certificates with a verifiable certificate path to exactly one of the root of trust certificates;
  • They are not included in any CRL;
  • And they list one or more of the Organizational Units of the MSP configuration in the OU field of their X.509 certificate structure.

For more information on the validity of identities in the current MSP implementation, we refer the reader to MSP Identity Validity Rules.

In addition to verification-related parameters, for the MSP to enable the node on which it is instantiated to sign or authenticate, one needs to specify:

  • The signing key used for signing by the node (currently only ECDSA keys are supported), and
  • The node’s X.509 certificate, that is a valid identity under the verification parameters of this MSP.

It is important to note that MSP identities never expire; they can only be revoked by adding them to the appropriate CRLs. Additionally, there is currently no support for enforcing revocation of TLS certificates.

How to generate MSP certificates and their signing keys?

To generate X.509 certificates to feed its MSP configuration, the application can use OpenSSL. We emphasise that in Hyperledger Fabric there is no support for certificates with RSA keys.
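
For example, a sketch of generating an ECDSA root key and self-signed certificate with OpenSSL (the file names and subject are illustrative):

  # prime256v1 is an ECDSA curve; RSA keys are not supported by Fabric MSPs
  openssl ecparam -name prime256v1 -genkey -noout -out ca-key.pem
  openssl req -new -x509 -key ca-key.pem -out ca-cert.pem -days 3650 -subj "/CN=ca.example.com"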

Alternatively one can use the cryptogen tool, whose operation is explained in Getting Started.
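
For instance, a sketch of invoking the tool (crypto-config.yaml is the sample configuration file used throughout the tutorials):

  # Generates keys and certificates for all orgs described in the config file
  cryptogen generate --config=./crypto-config.yaml --output=crypto-config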

Hyperledger Fabric CA can also be used to generate the keys and certificates needed to configure an MSP.

MSP setup on the peer & orderer side

To set up a local MSP (for either a peer or an orderer), the administrator should create a folder (e.g. $MY_PATH/mspconfig) that contains the following subfolders and file:

  1. a folder admincerts to include PEM files each corresponding to an administrator certificate
  2. a folder cacerts to include PEM files each corresponding to a root CA’s certificate
  3. (optional) a folder intermediatecerts to include PEM files each corresponding to an intermediate CA’s certificate
  4. (optional) a file config.yaml to configure the supported Organizational Units and identity classifications (see respective sections below).
  5. (optional) a folder crls to include the considered CRLs
  6. a folder keystore to include a PEM file with the node’s signing key; we emphasise that currently RSA keys are not supported
  7. a folder signcerts to include a PEM file with the node’s X.509 certificate
  8. (optional) a folder tlscacerts to include PEM files each corresponding to a TLS root CA’s certificate
  9. (optional) a folder tlsintermediatecerts to include PEM files each corresponding to an intermediate TLS CA’s certificate
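
A sketch of creating this layout from the shell (the certificate file names are hypothetical; optional folders are only needed if you use the corresponding feature):

  MSP_DIR=$MY_PATH/mspconfig

  # Required folders
  mkdir -p $MSP_DIR/admincerts $MSP_DIR/cacerts $MSP_DIR/keystore $MSP_DIR/signcerts

  # Optional folders
  mkdir -p $MSP_DIR/intermediatecerts $MSP_DIR/crls $MSP_DIR/tlscacerts $MSP_DIR/tlsintermediatecerts

  # Drop the PEM material into place
  cp admin-cert.pem   $MSP_DIR/admincerts/
  cp root-ca-cert.pem $MSP_DIR/cacerts/
  cp node-key.pem     $MSP_DIR/keystore/
  cp node-cert.pem    $MSP_DIR/signcerts/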

In the configuration file of the node (core.yaml file for the peer, and orderer.yaml for the orderer), one needs to specify the path to the mspconfig folder, and the MSP Identifier of the node’s MSP. The path to the mspconfig folder is expected to be relative to FABRIC_CFG_PATH and is provided as the value of parameter mspConfigPath for the peer, and LocalMSPDir for the orderer. The identifier of the node’s MSP is provided as a value of parameter localMspId for the peer and LocalMSPID for the orderer. These variables can be overridden via the environment using the CORE prefix for peer (e.g. CORE_PEER_LOCALMSPID) and the ORDERER prefix for the orderer (e.g. ORDERER_GENERAL_LOCALMSPID). Notice that for the orderer setup, one needs to generate, and provide to the orderer the genesis block of the system channel. The MSP configuration needs of this block are detailed in the next section.
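
For example, a sketch of overriding these settings via the environment (the paths and IDs are illustrative):

  # Peer: overrides core.yaml's peer.mspConfigPath and peer.localMspId
  export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/msp
  export CORE_PEER_LOCALMSPID=Org1MSP

  # Orderer: overrides orderer.yaml's General.LocalMSPDir and General.LocalMSPID
  export ORDERER_GENERAL_LOCALMSPDIR=$FABRIC_CFG_PATH/msp
  export ORDERER_GENERAL_LOCALMSPID=OrdererMSP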

Reconfiguration of a “local” MSP is only possible manually, and requires that the peer or orderer process is restarted. In subsequent releases we aim to offer online/dynamic reconfiguration (i.e. without requiring the node to be stopped, by using a node-managed system chaincode).

Organizational Units

In order to configure the list of Organizational Units that valid members of this MSP should include in their X.509 certificate, the config.yaml file needs to specify the organizational unit identifiers. Here is an example:

OrganizationalUnitIdentifiers:
  - Certificate: "cacerts/cacert1.pem"
    OrganizationalUnitIdentifier: "commercial"
  - Certificate: "cacerts/cacert2.pem"
    OrganizationalUnitIdentifier: "administrators"

The above example declares two organizational unit identifiers: commercial and administrators. An MSP identity is valid if it carries at least one of these organizational unit identifiers. The Certificate field refers to the CA or intermediate CA certificate path under which identities, having that specific OU, should be validated. The path is relative to the MSP root folder and cannot be empty.

Identity Classification

The default MSP implementation allows identities to be further classified into clients and peers, based on the OUs of their x509 certificates. An identity should be classified as a client if it submits transactions, queries peers, etc. An identity should be classified as a peer if it endorses or commits transactions. In order to define clients and peers of a given MSP, the config.yaml file needs to be set appropriately. Here is an example:

NodeOUs:
  Enable: true
  ClientOUIdentifier:
    Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "client"
  PeerOUIdentifier:
    Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "peer"

As shown above, NodeOUs.Enable is set to true, which enables identity classification. Client (peer) identifiers are then defined by setting the following properties for the NodeOUs.ClientOUIdentifier (NodeOUs.PeerOUIdentifier) key:

  a. OrganizationalUnitIdentifier: Set this to the value that matches the OU that the x509 certificate of a client (peer) should contain.
  b. Certificate: Set this to the CA or intermediate CA certificate under which client (peer) identities should be validated. The field is relative to the MSP root folder. It can be empty, meaning that the identity’s x509 certificate can be validated under any CA defined in the MSP configuration.

When the classification is enabled, MSP administrators need to be clients of that MSP, meaning that their x509 certificates need to carry the OU that identifies the clients. Notice also that an identity can be either a client or a peer. The two classifications are mutually exclusive. If an identity is neither a client nor a peer, the validation will fail.

Finally, notice that for upgraded environments the 1.1 channel capability needs to be enabled before identity classification can be used.

Channel MSP setup

At the genesis of the system, verification parameters of all the MSPs that appear in the network need to be specified, and included in the system channel’s genesis block. Recall that MSP verification parameters consist of the MSP identifier, the root of trust certificates, intermediate CA and admin certificates, as well as OU specifications and CRLs. The system genesis block is provided to the orderers at their setup phase, and allows them to authenticate channel creation requests. Orderers would reject the system genesis block if it includes two MSPs with the same identifier, and consequently the bootstrapping of the network would fail.

For application channels, the verification components of only the MSPs that govern a channel need to reside in the channel’s genesis block. We emphasise that it is the responsibility of the application to ensure that correct MSP configuration information is included in the genesis blocks (or the most recent configuration block) of a channel prior to instructing one or more of their peers to join the channel.

When bootstrapping a channel with the help of the configtxgen tool, one can configure the channel MSPs by including the verification parameters of MSP in the mspconfig folder, and setting that path in the relevant section in configtx.yaml.
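
For example, a sketch of generating a channel creation transaction whose MSPs come from the MSPDir paths referenced in configtx.yaml (the profile name is illustrative):

  configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel.tx -channelID mychannel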

Reconfiguration of an MSP on the channel, including announcements of the certificate revocation lists associated to the CAs of that MSP is achieved through the creation of a config_update object by the owner of one of the administrator certificates of the MSP. The client application managed by the admin would then announce this update to the channels in which this MSP appears.

Best Practices

In this section we elaborate on best practices for MSP configuration in commonly met scenarios.

1) Mapping between organizations/corporations and MSPs

We recommend that there is a one-to-one mapping between organizations and MSPs. If a different type of mapping is chosen, the following needs to be considered:

  • One organization employing various MSPs. This corresponds to the case of an organization including a variety of divisions each represented by its MSP, either for management independence reasons, or for privacy reasons. In this case a peer can only be owned by a single MSP, and will not recognize peers with identities from other MSPs as peers of the same organization. The implication of this is that peers may share through gossip organization-scoped data with a set of peers that are members of the same subdivision, and NOT with the full set of providers constituting the actual organization.
  • Multiple organizations using a single MSP. This corresponds to a case of a consortium of organisations that are governed by similar membership architecture. One needs to know here that peers would propagate organization-scoped messages to the peers that have an identity under the same MSP regardless of whether they belong to the same actual organization. This is a limitation of the granularity of MSP definition, and/or of the peer’s configuration.

2) One organization has different divisions (say organizational units), to which it wants to grant access to different channels.

Two ways to handle this:

  • Define one MSP to accommodate membership for all organization’s members. Configuration of that MSP would consist of a list of root CAs, intermediate CAs and admin certificates; and membership identities would include the organizational unit (OU) a member belongs to. Policies can then be defined to capture members of a specific OU, and these policies may constitute the read/write policies of a channel or endorsement policies of a chaincode. A limitation of this approach is that gossip peers would consider peers with membership identities under their local MSP as members of the same organization, and would consequently gossip with them organisation-scoped data (e.g. their status).
  • Defining one MSP to represent each division. This would involve specifying for each division, a set of certificates for root CAs, intermediate CAs, and admin Certs, such that there is no overlapping certification path across MSPs. This would mean that, for example, a different intermediate CA per subdivision is employed. Here the disadvantage is the management of more than one MSPs instead of one, but this circumvents the issue present in the previous approach. One could also define one MSP for each division by leveraging an OU extension of the MSP configuration.

3) Separating clients from peers of the same organization.

In many cases it is required that the “type” of an identity is retrievable from the identity itself (e.g. it may be needed that endorsements are guaranteed to have been produced by peers, and not by clients or by nodes acting solely as orderers).

There is limited support for such requirements.

One way to allow for this separation is to create a separate intermediate CA for each node type - one for clients and one for peers/orderers - and configure two different MSPs - one for clients and one for peers/orderers. Channels this organization should be accessing would need to include both MSPs, while endorsement policies will leverage only the MSP that refers to the peers. This would ultimately result in the organization being mapped to two MSP instances, and would have certain consequences on the way peers and clients interact.

Gossip would not be drastically impacted as all peers of the same organization would still belong to one MSP. Peers can restrict the execution of certain system chaincodes to local MSP based policies. For example, peers would only execute a “joinChannel” request if the request is signed by the admin of their local MSP, who can only be a client (the end-user should be sitting at the origin of that request). We can work around this inconsistency if we accept that the only clients to be members of a peer/orderer MSP would be the administrators of that MSP.

Another point to be considered with this approach is that peers authorize event registration requests based on membership of the request originator within their local MSP. Clearly, since the originator of the request is a client, the request originator always belongs to a different MSP than the requested peer, and the peer would reject the request.

4) Admin and CA certificates.

It is important to set MSP admin certificates to be different from any of the certificates considered by the MSP for the root of trust, or intermediate CAs. This is a common (security) practice to separate the duties of management of membership components from the issuing of new certificates, and/or validation of existing ones.

5) Blacklisting an intermediate CA.

As mentioned in previous sections, reconfiguration of an MSP is achieved by reconfiguration mechanisms (manual reconfiguration for the local MSP instances, and via properly constructed config_update messages for MSP instances of a channel). Clearly, there are two ways to ensure an intermediate CA considered in an MSP is no longer considered for that MSP’s identity validation:

  1. Reconfigure the MSP to no longer include the certificate of that intermediate CA in the list of trusted intermediate CA certs. For the locally configured MSP, this would mean that the certificate of this CA is removed from the intermediatecerts folder.
  2. Reconfigure the MSP to include a CRL produced by the root of trust which denounces the mentioned intermediate CA’s certificate.

In the current MSP implementation we only support method (1) as it is simpler and does not require blacklisting the no longer considered intermediate CA.
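
For a locally configured MSP, method (1) can be as simple as the following sketch (the certificate file name is hypothetical):

  # Remove the intermediate CA's certificate from the local MSP...
  rm $MY_PATH/mspconfig/intermediatecerts/blacklisted-ica-cert.pem
  # ...then restart the peer or orderer process so the change takes effect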

6) CAs and TLS CAs

MSP identities’ root CAs and MSP TLS certificates’ root CAs (and relative intermediate CAs) need to be declared in different folders. This is to avoid confusion between different classes of certificates. It is not forbidden to reuse the same CAs for both MSP identities and TLS certificates, but best practices suggest avoiding this in production.

Channel Configuration (configtx)

Shared configuration for a Hyperledger Fabric blockchain network is stored in a collection of configuration transactions, one per channel. Each configuration transaction is usually referred to by the shorter name configtx.

Channel configuration has the following important properties:

  1. Versioned: All elements of the configuration have an associated version which is advanced with every modification. Further, every committed configuration receives a sequence number.
  2. Permissioned: Each element of the configuration has an associated policy which governs whether or not modification to that element is permitted. Anyone with a copy of the previous configtx (and no additional info) may verify the validity of a new config based on these policies.
  3. Hierarchical: A root configuration group contains sub-groups, and each group of the hierarchy has associated values and policies. These policies can take advantage of the hierarchy to derive policies at one level from policies of lower levels.

Anatomy of a configuration

Configuration is stored as a transaction of type HeaderType_CONFIG in a block with no other transactions. These blocks are referred to as Configuration Blocks, the first of which is referred to as the Genesis Block.

The proto structures for configuration are stored in fabric/protos/common/configtx.proto. The Envelope of type HeaderType_CONFIG encodes a ConfigEnvelope message as the Payload data field. The proto for ConfigEnvelope is defined as follows:

message ConfigEnvelope {
    Config config = 1;
    Envelope last_update = 2;
}

The last_update field is defined below in the Configuration updates section, but is only necessary when validating the configuration, not reading it. Instead, the currently committed configuration is stored in the config field, containing a Config message.

message Config {
    uint64 sequence = 1;
    ConfigGroup channel_group = 2;
}

The sequence number is incremented by one for each committed configuration. The channel_group field is the root group which contains the configuration. The ConfigGroup structure is recursively defined, and builds a tree of groups, each of which contains values and policies. It is defined as follows:

message ConfigGroup {
    uint64 version = 1;
    map<string,ConfigGroup> groups = 2;
    map<string,ConfigValue> values = 3;
    map<string,ConfigPolicy> policies = 4;
    string mod_policy = 5;
}

Because ConfigGroup is a recursive structure, it has a hierarchical arrangement. The following example is expressed for clarity in golang notation.

// Assume the following groups are defined
var root, child1, child2, grandChild1, grandChild2, grandChild3 *ConfigGroup

// Set the following values
root.Groups["child1"] = child1
root.Groups["child2"] = child2
child1.Groups["grandChild1"] = grandChild1
child2.Groups["grandChild2"] = grandChild2
child2.Groups["grandChild3"] = grandChild3

// The resulting config structure of groups looks like:
// root:
//     child1:
//         grandChild1
//     child2:
//         grandChild2
//         grandChild3

Each group defines a level in the config hierarchy, and each group has an associated set of values (indexed by string key) and policies (also indexed by string key).

Values are defined by:

message ConfigValue {
    uint64 version = 1;
    bytes value = 2;
    string mod_policy = 3;
}

Policies are defined by:

message ConfigPolicy {
    uint64 version = 1;
    Policy policy = 2;
    string mod_policy = 3;
}

Note that Values, Policies, and Groups all have a version and a mod_policy. The version of an element is incremented each time that element is modified. The mod_policy is used to govern the required signatures to modify that element. For Groups, modification is adding or removing elements to the Values, Policies, or Groups maps (or changing the mod_policy). For Values and Policies, modification is changing the Value and Policy fields respectively (or changing the mod_policy). Each element’s mod_policy is evaluated in the context of the current level of the config. Consider the following example mod policies defined at Channel.Groups["Application"] (Here, we use the golang map reference syntax, so Channel.Groups["Application"].Policies["policy1"] refers to the base Channel group’s Application group’s Policies map’s policy1 policy.)

  • policy1 maps to Channel.Groups["Application"].Policies["policy1"]
  • Org1/policy2 maps to Channel.Groups["Application"].Groups["Org1"].Policies["policy2"]
  • /Channel/policy3 maps to Channel.Policies["policy3"]

Note that if a mod_policy references a policy which does not exist, the item cannot be modified.
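
To make those three rules concrete, here is a minimal Go sketch of how such a mod_policy path could be resolved against the ConfigGroup tree defined earlier. It is an illustration only, not Fabric’s actual resolver, and it assumes the generated Go types for the ConfigGroup and ConfigPolicy messages shown above.

import "strings"

// resolveModPolicy resolves a mod_policy string such as "policy1",
// "Org1/policy2", or "/Channel/policy3" to a ConfigPolicy. root is the
// base Channel group; current is the group whose element carries the
// mod_policy. A nil result means the policy does not resolve, so the
// element cannot be modified.
func resolveModPolicy(root, current *ConfigGroup, modPolicy string) *ConfigPolicy {
  group := current
  path := strings.Split(modPolicy, "/")
  if strings.HasPrefix(modPolicy, "/") {
    // Absolute path: anchored at the root group, whose own name
    // ("Channel") is the first component after the leading slash.
    group = root
    path = path[2:]
  }
  if len(path) == 0 {
    return nil
  }
  // Every component except the last names a nested group; the last
  // names an entry in that group's Policies map.
  for _, sub := range path[:len(path)-1] {
    if group = group.Groups[sub]; group == nil {
      return nil
    }
  }
  return group.Policies[path[len(path)-1]]
}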

Configuration updates

Configuration updates are submitted as an Envelope message of type HeaderType_CONFIG_UPDATE. The Payload data of the transaction is a marshaled ConfigUpdateEnvelope. The ConfigUpdateEnvelope is defined as follows:

message ConfigUpdateEnvelope {
    bytes config_update = 1;
    repeated ConfigSignature signatures = 2;
}

The signatures field contains the set of signatures which authorizes the config update. Its message definition is:

message ConfigSignature {
    bytes signature_header = 1;
    bytes signature = 2;
}

The signature_header is as defined for standard transactions, while the signature is over the concatenation of the signature_header bytes and the config_update bytes from the ConfigUpdateEnvelope message.
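
As a minimal sketch of that rule, a client-side helper could assemble a ConfigSignature along these lines (the struct mirrors the proto message above, and signFn is a stand-in for a real MSP signing identity):

// ConfigSignature mirrors the proto message above.
type ConfigSignature struct {
  SignatureHeader []byte
  Signature       []byte
}

// signConfigUpdate signs the concatenation of the signature header bytes
// and the config_update bytes from the ConfigUpdateEnvelope.
func signConfigUpdate(sigHeader, configUpdate []byte, signFn func([]byte) ([]byte, error)) (*ConfigSignature, error) {
  msg := append(append([]byte{}, sigHeader...), configUpdate...)
  sig, err := signFn(msg)
  if err != nil {
    return nil, err
  }
  return &ConfigSignature{SignatureHeader: sigHeader, Signature: sig}, nil
}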

The ConfigUpdateEnvelope config_update bytes are a marshaled ConfigUpdate message which is defined as follows:

message ConfigUpdate {
    string channel_id = 1;
    ConfigGroup read_set = 2;
    ConfigGroup write_set = 3;
}

The channel_id is the channel ID the update is bound for; this is necessary to scope the signatures which support this reconfiguration.

The read_set specifies a subset of the existing configuration, specified sparsely where only the version field is set and no other fields need to be populated. The particular ConfigValue value or ConfigPolicy policy fields should never be set in the read_set. The ConfigGroup may have a subset of its map fields populated, so as to reference an element deeper in the config tree. For instance, to include the Application group in the read_set, its parent (the Channel group) must also be included in the read_set, but the Channel group does not need to populate all of the keys, such as the Orderer group key, or any of the values or policies keys.

The write_set specifies the pieces of configuration which are modified. Because of the hierarchical nature of the configuration, a write to an element deep in the hierarchy must contain the higher level elements in its write_set as well. However, for any element in the write_set which is also specified in the read_set at the same version, the element should be specified sparsely, just as in the read_set.

For example, given the configuration:

Channel: (version 0)
    Orderer (version 0)
    Application (version 3)
       Org1 (version 2)

To submit a configuration update which modifies Org1, the read_set would be:

Channel: (version 0)
    Application: (version 3)

and the write_set would be

Channel: (version 0)
    Application: (version 3)
        Org1 (version 3)

When the CONFIG_UPDATE is received, the orderer computes the resulting CONFIG by doing the following:

  1. Verifies the channel_id and read_set. All elements in the read_set must exist at the given versions.
  2. Computes the update set by collecting all elements in the write_set which do not appear at the same version in the read_set.
  3. Verifies that each element in the update set increments the version number of the corresponding element by exactly 1.
  4. Verifies that the signature set attached to the ConfigUpdateEnvelope satisfies the mod_policy for each element in the update set.
  5. Computes a new complete version of the config by applying the update set to the current config.
  6. Writes the new config into a ConfigEnvelope which includes the CONFIG_UPDATE as the last_update field and the new config encoded in the config field, along with the incremented sequence value.
  7. Writes the new ConfigEnvelope into an Envelope of type CONFIG, and ultimately writes this as the sole transaction in a new configuration block.
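
As a rough illustration of steps 2 and 3 above, the version bookkeeping can be sketched in Go over a flattened key-to-version view of the config; the real computation walks the recursive ConfigGroup tree.

import "fmt"

// computeUpdateSet returns the elements of the write set that are being
// modified: those absent from the read set, or present there at a different
// version (step 2). It also checks that every modified element advances the
// committed version by exactly one (step 3).
func computeUpdateSet(committed, readSet, writeSet map[string]uint64) (map[string]uint64, error) {
  updates := map[string]uint64{}
  for key, wv := range writeSet {
    if rv, ok := readSet[key]; ok && rv == wv {
      continue // sparsely specified, unmodified element
    }
    if wv != committed[key]+1 {
      return nil, fmt.Errorf("%s: version must advance from %d to %d, got %d",
        key, committed[key], committed[key]+1, wv)
    }
    updates[key] = wv
  }
  return updates, nil
}

For the worked example above, the update set would contain only the Org1 group, at version 3.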

When the peer (or any other receiver for Deliver) receives this configuration block, it should verify that the config was appropriately validated by applying the last_update message to the current config and verifying that the orderer-computed config field contains the correct new configuration.

Permitted configuration groups and values

Any valid configuration is a subset of the following configuration. Here we use the notation peer.<MSG> to define a ConfigValue whose value field is a marshaled proto message of name <MSG> defined in fabric/protos/peer/configuration.proto. The notations common.<MSG>, msp.<MSG>, and orderer.<MSG> correspond similarly, but with their messages defined in fabric/protos/common/configuration.proto, fabric/protos/msp/mspconfig.proto, and fabric/protos/orderer/configuration.proto respectively.

Note that the keys {{org_name}} and {{consortium_name}} represent arbitrary names, and indicate an element which may be repeated with different names.

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Application":&ConfigGroup{
            Groups:map<string, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                        "AnchorPeers":peer.AnchorPeers,
                    },
                },
            },
        },
        "Orderer":&ConfigGroup{
            Groups:map<string, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                    },
                },
            },

            Values:map<string, *ConfigValue> {
                "ConsensusType":orderer.ConsensusType,
                "BatchSize":orderer.BatchSize,
                "BatchTimeout":orderer.BatchTimeout,
                "KafkaBrokers":orderer.KafkaBrokers,
            },
        },
        "Consortiums":&ConfigGroup{
            Groups:map<string, *ConfigGroup> {
                {{consortium_name}}:&ConfigGroup{
                    Groups:map<string, *ConfigGroup> {
                        {{org_name}}:&ConfigGroup{
                            Values:map<string, *ConfigValue>{
                                "MSP":msp.MSPConfig,
                            },
                        },
                    },
                    Values:map<string, *ConfigValue> {
                        "ChannelCreationPolicy":common.Policy,
                    }
                },
            },
        },
    },

    Values: map<string, *ConfigValue> {
        "HashingAlgorithm":common.HashingAlgorithm,
        "BlockHashingDataStructure":common.BlockDataHashingStructure,
        "Consortium":common.Consortium,
        "OrdererAddresses":common.OrdererAddresses,
    },
}

Orderer system channel configuration

The ordering system channel needs to define ordering parameters, and consortiums for creating channels. There must be exactly one ordering system channel for an ordering service, and it is the first channel to be created (or more accurately bootstrapped). It is recommended never to define an Application section inside the ordering system channel genesis configuration, though it may be done for testing. Note that any member with read access to the ordering system channel may see all channel creations, so this channel’s access should be restricted.

The ordering parameters are defined as the following subset of config:

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Orderer":&ConfigGroup{
            Groups:map<string, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                    },
                },
            },

            Values:map<string, *ConfigValue> {
                "ConsensusType":orderer.ConsensusType,
                "BatchSize":orderer.BatchSize,
                "BatchTimeout":orderer.BatchTimeout,
                "KafkaBrokers":orderer.KafkaBrokers,
            },
        },
    },
}

Each organization participating in ordering has a group element under the Orderer group. This group defines a single parameter MSP which contains the cryptographic identity information for that organization. The Values of the Orderer group determine how the ordering nodes function. They exist per channel, so orderer.BatchTimeout for instance may be specified differently on one channel than another.

At startup, the orderer is faced with a filesystem which contains information for many channels. The orderer identifies the system channel by identifying the channel with the consortiums group defined. The consortiums group has the following structure.

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Consortiums":&ConfigGroup{
            Groups:map<string, *ConfigGroup> {
                {{consortium_name}}:&ConfigGroup{
                    Groups:map<string, *ConfigGroup> {
                        {{org_name}}:&ConfigGroup{
                            Values:map<string, *ConfigValue>{
                                "MSP":msp.MSPConfig,
                            },
                        },
                    },
                    Values:map<string, *ConfigValue> {
                        "ChannelCreationPolicy":common.Policy,
                    }
                },
            },
        },
    },
}

Note that each consortium defines a set of members, just like the organizational members for the ordering orgs. Each consortium also defines a ChannelCreationPolicy. This is a policy which is applied to authorize channel creation requests. Typically, this value will be set to an ImplicitMetaPolicy requiring that the new members of the channel sign to authorize the channel creation. More details about channel creation follow later in this document.

Application channel configuration

Application configuration is for channels which are designed for application type transactions. It is defined as follows:

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Application":&ConfigGroup{
            Groups:map<string, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                        "AnchorPeers":peer.AnchorPeers,
                    },
                },
            },
        },
    },
}

Just like with the Orderer section, each organization is encoded as a group. However, instead of only encoding the MSP identity information, each org additionally encodes a list of AnchorPeers. This list allows the peers of different organizations to contact each other for peer gossip networking.

The application channel encodes a copy of the orderer orgs and consensus options to allow for deterministic updating of these parameters, so the same Orderer section from the orderer system channel configuration is included. However from an application perspective this may be largely ignored.

Channel creation

When the orderer receives a CONFIG_UPDATE for a channel which does not exist, the orderer assumes that this must be a channel creation request and performs the following.

  1. The orderer identifies the consortium which the channel creation request is to be performed for. It does this by looking at the Consortium value of the top level group.
  2. The orderer verifies that the organizations included in the Application group are a subset of the organizations included in the corresponding consortium and that the ApplicationGroup is set to version 1.
  3. The orderer verifies that, if the consortium has members, the new channel also has application members (creating consortiums and channels with no members is useful for testing only).
  4. The orderer creates a template configuration by taking the Orderer group from the ordering system channel, and creating an Application group with the newly specified members and specifying its mod_policy to be the ChannelCreationPolicy as specified in the consortium config. Note that the policy is evaluated in the context of the new configuration, so a policy requiring ALL members, would require signatures from all the new channel members, not all the members of the consortium.
  5. The orderer then applies the CONFIG_UPDATE as an update to this template configuration. Because the CONFIG_UPDATE applies modifications to the Application group (its version is 1), the config code validates these updates against the ChannelCreationPolicy. If the channel creation contains any other modifications, such as to an individual org’s anchor peers, the corresponding mod policy for the element will be invoked.
  6. The new CONFIG transaction with the new channel config is wrapped and sent for ordering on the ordering system channel. After ordering, the channel is created.

Endorsement policies

Endorsement policies are used to instruct a peer on how to decide whether a transaction is properly endorsed. When a peer receives a transaction, it invokes the VSCC (Validation System Chaincode) associated with the transaction’s Chaincode as part of the transaction validation flow to determine the validity of the transaction. Recall that a transaction contains one or more endorsements from as many endorsing peers. VSCC is tasked to make the following determinations:

  • all endorsements are valid (i.e. they are valid signatures from valid certificates over the expected message)
  • there is an appropriate number of endorsements
  • endorsements come from the expected source(s)

Endorsement policies are a way of specifying the second and third points.

Endorsement policy syntax in the CLI

In the CLI, a simple language is used to express policies in terms of boolean expressions over principals.

A principal is described in terms of the MSP that is tasked to validate the identity of the signer and of the role that the signer has within that MSP. Four roles are supported: member, admin, client, and peer. Principals are described as MSP.ROLE, where MSP is the MSP ID that is required, and ROLE is one of the four strings member, admin, client and peer. Examples of valid principals are 'Org0.admin' (any administrator of the Org0 MSP), 'Org1.member' (any member of the Org1 MSP), 'Org1.client' (any client of the Org1 MSP), and 'Org1.peer' (any peer of the Org1 MSP).

The syntax of the language is:

EXPR(E[, E...])

where EXPR is either AND or OR, representing the two boolean operators, and E is either a principal (with the syntax described above) or another nested call to EXPR.

For example:
  • AND('Org1.member', 'Org2.member', 'Org3.member') requests 1 signature from each of the three principals
  • OR('Org1.member', 'Org2.member') requests 1 signature from either one of the two principals
  • OR('Org1.member', AND('Org2.member', 'Org3.member')) requests either one signature from a member of the Org1 MSP or 1 signature from a member of the Org2 MSP and 1 signature from a member of the Org3 MSP.
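
To make the semantics of these expressions concrete, the following self-contained Go sketch evaluates a parsed policy tree against a set of satisfied principals. It is purely illustrative: in Fabric, such expressions are compiled into a SignaturePolicyEnvelope and checked against the actual endorsement signatures, not against principal strings.

package main

import "fmt"

// Expr is a node of a parsed policy: a principal such as "Org1.member",
// or an AND/OR over sub-expressions.
type Expr interface {
  Satisfied(signers map[string]bool) bool
}

// Principal is a leaf node, e.g. "Org1.member".
type Principal string

func (p Principal) Satisfied(signers map[string]bool) bool { return signers[string(p)] }

// And requires every sub-expression to be satisfied.
type And []Expr

func (a And) Satisfied(signers map[string]bool) bool {
  for _, e := range a {
    if !e.Satisfied(signers) {
      return false
    }
  }
  return true
}

// Or requires at least one sub-expression to be satisfied.
type Or []Expr

func (o Or) Satisfied(signers map[string]bool) bool {
  for _, e := range o {
    if e.Satisfied(signers) {
      return true
    }
  }
  return false
}

func main() {
  // OR('Org1.member', AND('Org2.member', 'Org3.member'))
  policy := Or{Principal("Org1.member"), And{Principal("Org2.member"), Principal("Org3.member")}}
  fmt.Println(policy.Satisfied(map[string]bool{"Org2.member": true}))                      // false
  fmt.Println(policy.Satisfied(map[string]bool{"Org2.member": true, "Org3.member": true})) // true
}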

Specifying endorsement policies for a chaincode

Using this language, a chaincode deployer can request that the endorsements for a chaincode be validated against the specified policy.

Note

If not specified at instantiation time, the endorsement policy defaults to “any member of the organizations in the channel”. For example, a channel with “Org1” and “Org2” would have a default endorsement policy of “OR(‘Org1.member’, ‘Org2.member’)”.

The policy can be specified at instantiate time using the -P switch, followed by the policy.

For example:

peer chaincode instantiate -C <channelid> -n mycc -P "AND('Org1.member', 'Org2.member')"

This command deploys chaincode mycc with the policy AND('Org1.member', 'Org2.member') which would require that a member of both Org1 and Org2 sign the transaction.

Notice that, if the identity classification is enabled (see MSP Documentation), one can use the PEER role to restrict endorsement to only peers.

For example:

peer chaincode instantiate -C <channelid> -n mycc -P "AND('Org1.peer', 'Org2.peer')"

Note

A new organization added to the channel after instantiation can query a chaincode (provided the query has appropriate authorization as defined by channel policies and any application level checks enforced by the chaincode) but will not be able to commit a transaction endorsed by it. The endorsement policy needs to be modified to allow transactions to be committed with endorsements from the new organization (see Upgrade & invoke).

Error handling

General Overview

Hyperledger Fabric code should use the vendored package github.com/pkg/errors in place of the standard error type provided by Go. This package allows easy generation and display of stack traces with error messages.

Usage Instructions

github.com/pkg/errors should be used in place of all calls to fmt.Errorf() or errors.New(). Using this package will generate a call stack that will be appended to the error message.

Using this package is simple and will only require easy tweaks to your code.

First, you’ll need to import github.com/pkg/errors.

Next, update all errors that are generated by your code to use one of the error creation functions (errors.New(), errors.Errorf(), errors.WithMessage(), errors.Wrap(), errors.Wrapf()).

Note

See https://godoc.org/github.com/pkg/errors for complete documentation of the available error creation functions. Also, refer to the General guidelines section below for more specific guidelines for using the package for Fabric code.

Finally, change the formatting directive for any logger or fmt.Printf() calls from %s to %+v to print the call stack along with the error message.

General guidelines for error handling in Hyperledger Fabric

  • If you are servicing a user request, you should log the error and return it.
  • If the error comes from an external source, such as a Go library or vendored package, wrap the error using errors.Wrap() to generate a call stack for the error.
  • If the error comes from another Fabric function, add further context, if desired, to the error message using errors.WithMessage() while leaving the call stack unaffected.
  • A panic should not be allowed to propagate to other packages.

Example program

The following example program provides a clear demonstration of using the package:

package main

import (
  "fmt"

  "github.com/pkg/errors"
)

func wrapWithStack() error {
  err := createError()
  // do this when error comes from external source (go lib or vendor)
  return errors.Wrap(err, "wrapping an error with stack")
}
func wrapWithoutStack() error {
  err := createError()
  // do this when error comes from internal Fabric since it already has stack trace
  return errors.WithMessage(err, "wrapping an error without stack")
}
func createError() error {
  return errors.New("original error")
}

func main() {
  err := createError()
  fmt.Printf("print error without stack: %s\n\n", err)
  fmt.Printf("print error with stack: %+v\n\n", err)
  err = wrapWithoutStack()
  fmt.Printf("%+v\n\n", err)
  err = wrapWithStack()
  fmt.Printf("%+v\n\n", err)
}

Logging Control

Overview

Logging in the peer application and in the shim interface to chaincodes is programmed using facilities provided by the github.com/op/go-logging package. This package supports

  • Logging control based on the severity of the message
  • Logging control based on the software module generating the message
  • Different pretty-printing options based on the severity of the message

All logs are currently directed to stderr, and the pretty-printing is currently fixed. However global and module-level control of logging by severity is provided for both users and developers. There are currently no formalized rules for the types of information provided at each severity level, however when submitting bug reports the developers may want to see full logs down to the DEBUG level.

In pretty-printed logs the logging level is indicated both by color and by a 4-character code, e.g, “ERRO” for ERROR, “DEBU” for DEBUG, etc. In the logging context a module is an arbitrary name (string) given by developers to groups of related messages. In the pretty-printed example below, the logging modules “peer”, “rest” and “main” are generating logs.

16:47:09.634 [peer] GetLocalAddress -> INFO 033 Auto detected peer address: 9.3.158.178:7051
16:47:09.635 [rest] StartOpenchainRESTServer -> INFO 035 Initializing the REST service...
16:47:09.635 [main] serve -> INFO 036 Starting peer with id=name:"vp1" , network id=dev, address=9.3.158.178:7051, discovery.rootnode=, validator=true

An arbitrary number of logging modules can be created at runtime, therefore there is no “master list” of modules, and logging control constructs can not check whether logging modules actually do or will exist. Also note that the logging module system does not understand hierarchy or wildcarding: You may see module names like “foo/bar” in the code, but the logging system only sees a flat string. It doesn’t understand that “foo/bar” is related to “foo” in any way, or that “foo/*” might indicate all “submodules” of foo.

peer

The logging level of the peer command can be controlled from the command line for each invocation using the --logging-level flag, for example

peer node start --logging-level=debug

The default logging level for each individual peer subcommand can also be set in the core.yaml file. For example the key logging.node sets the default level for the node subcommand. Comments in the file also explain how the logging level can be overridden in various ways by using environment variables.

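For example, under the CORE_-prefixed environment variable mapping used by the peer’s configuration, the default level for the node subcommand could plausibly be overridden at startup (the variable name here is assumed from that convention, not quoted from core.yaml):

CORE_LOGGING_NODE=debug peer node start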

Logging severity levels are specified using case-insensitive strings chosen from

CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG

The full logging level specification for the peer is of the form

[<module>[,<module>...]=]<level>[:[<module>[,<module>...]=]<level>...]

A logging level by itself is taken as the overall default. Otherwise, overrides for individual or groups of modules can be specified using the

<module>[,<module>...]=<level>

syntax. Examples of specifications (valid for all of --logging-level, environment variable and core.yaml settings):

info                                       - Set default to INFO
warning:main,db=debug:chaincode=info       - Default WARNING; Override for main,db,chaincode
chaincode=info:main=debug:db=debug:warning - Same as above

Go chaincodes

The standard mechanism to log within a chaincode application is to integrate with the logging transport exposed to each chaincode instance via the peer. The chaincode shim package provides APIs that allow a chaincode to create and manage logging objects whose logs will be formatted and interleaved consistently with the shim logs.

As independently executed programs, user-provided chaincodes may technically also produce output on stdout/stderr. While naturally useful for “devmode”, these channels are normally disabled on a production network to mitigate abuse from broken or malicious code. However, it is possible to enable this output even for peer-managed containers (e.g. “netmode”) on a per-peer basis via the CORE_VM_DOCKER_ATTACHSTDOUT=true configuration option.

Once enabled, each chaincode will receive its own logging channel keyed by its container-id. Any output written to either stdout or stderr will be integrated with the peer’s log on a per-line basis. It is not recommended to enable this for production.

API

NewLogger(name string) *ChaincodeLogger - Create a logging object for use by a chaincode

(c *ChaincodeLogger) SetLevel(level LoggingLevel) - Set the logging level of the logger

(c *ChaincodeLogger) IsEnabledFor(level LoggingLevel) bool - Return true if logs will be generated at the given level

LogLevel(levelString string) (LoggingLevel, error) - Convert a string to a LoggingLevel

A LoggingLevel is a member of the enumeration

LogDebug, LogInfo, LogNotice, LogWarning, LogError, LogCritical

which can be used directly, or generated by passing a case-insensitive version of the strings

DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL

to the LogLevel API.

Formatted logging at various severity levels is provided by the functions

(c *ChaincodeLogger) Debug(args ...interface{})
(c *ChaincodeLogger) Info(args ...interface{})
(c *ChaincodeLogger) Notice(args ...interface{})
(c *ChaincodeLogger) Warning(args ...interface{})
(c *ChaincodeLogger) Error(args ...interface{})
(c *ChaincodeLogger) Critical(args ...interface{})

(c *ChaincodeLogger) Debugf(format string, args ...interface{})
(c *ChaincodeLogger) Infof(format string, args ...interface{})
(c *ChaincodeLogger) Noticef(format string, args ...interface{})
(c *ChaincodeLogger) Warningf(format string, args ...interface{})
(c *ChaincodeLogger) Errorf(format string, args ...interface{})
(c *ChaincodeLogger) Criticalf(format string, args ...interface{})

The f forms of the logging APIs provide for precise control over the formatting of the logs. The non-f forms of the APIs currently insert a space between the printed representations of the arguments, and arbitrarily choose the formats to use.

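For example, given a logger created with shim.NewLogger (as in the example at the end of this section), and hypothetical values n and owner:

logger.Infof("found %d assets for owner %s", n, owner) // precise format control
logger.Info("found", n, "assets for owner", owner)     // space-separated defaults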

In the current implementation, the logs produced by the shim and a ChaincodeLogger are timestamped, marked with the logger name and severity level, and written to stderr. Note that logging level control is currently based on the name provided when the ChaincodeLogger is created. To avoid ambiguities, all ChaincodeLogger should be given unique names other than “shim”. The logger name will appear in all log messages created by the logger. The shim logs as “shim”.

Go language chaincodes can also control the logging level of the chaincode shim interface through the SetLoggingLevel API.

SetLoggingLevel(LoggingLevel level) - Control the logging level of the shim

The default logging level for the shim is LogDebug.

Below is a simple example of how a chaincode might create a private logging object logging at the LogInfo level, and also control the amount of logging provided by the shim based on an environment variable.

var logger = shim.NewLogger("myChaincode")

func main() {

    // Emit this chaincode's own logs at LogInfo and above.
    logger.SetLevel(shim.LogInfo)

    // Let an environment variable control the verbosity of the shim itself.
    logLevel, _ := shim.LogLevel(os.Getenv("SHIM_LOGGING_LEVEL"))
    shim.SetLoggingLevel(logLevel)
    ...
}

Securing Communication With Transport Layer Security (TLS)

Fabric supports secure communication between nodes using TLS. TLS communication can use both one-way (server only) and two-way (server and client) authentication.

Configuring TLS for peer nodes

A peer node is both a TLS server and a TLS client. It is the former when another peer node, application, or the CLI makes a connection to it and the latter when it makes a connection to another peer node or orderer.

To enable TLS on a peer node set the following peer configuration properties:

  • peer.tls.enabled = true
  • peer.tls.cert.file = fully qualified path of the file that contains the TLS server certificate
  • peer.tls.key.file = fully qualified path of the file that contains the TLS server private key
  • peer.tls.rootcert.file = fully qualified path of the file that contains the certificate chain of the certificate authority (CA) that issued the TLS server certificate

By default, TLS client authentication is turned off when TLS is enabled on a peer node. This means that the peer node will not verify the certificate of a client (another peer node, application, or the CLI) during a TLS handshake. To enable TLS client authentication on a peer node, set the peer configuration property peer.tls.clientAuthRequired to true and set the peer.tls.clientRootCAs.files property to the CA chain file(s) that contain(s) the CA certificate chain(s) that issued TLS certificates for your organization’s clients.

By default, a peer node will use the same certificate and private key pair when acting as a TLS server and client. To use a different certificate and private key pair for the client side, set the peer.tls.clientCert.file and peer.tls.clientKey.file configuration properties to the fully qualified path of the client certificate and key file, respectively.

TLS with client authentication can also be enabled by setting the following environment variables:

  • CORE_PEER_TLS_ENABLED = true
  • CORE_PEER_TLS_CERT_FILE = fully qualified path of the server certificate
  • CORE_PEER_TLS_KEY_FILE = fully qualified path of the server private key
  • CORE_PEER_TLS_ROOTCERT_FILE = fully qualified path of the CA chain file
  • CORE_PEER_TLS_CLIENTAUTHREQUIRED = true
  • CORE_PEER_TLS_CLIENTROOTCAS_FILES = fully qualified path of the CA chain file
  • CORE_PEER_TLS_CLIENTCERT_FILE = fully qualified path of the client certificate
  • CORE_PEER_TLS_CLIENTKEY_FILE = fully qualified path of the client key

When client authentication is enabled on a peer node, a client is required to send its certificate during a TLS handshake. If the client does not send its certificate, the handshake will fail and the peer will close the connection.

When a peer joins a channel, the root CA certificate chains of the channel members are read from the config block of the channel and added to the TLS client and server root CAs data structure. So peer-to-peer and peer-to-orderer communication should work seamlessly.

Configuring TLS for orderer nodes

To enable TLS on an orderer node, set the following orderer configuration properties:

  • General.TLS.Enabled = true
  • General.TLS.PrivateKey = fully qualified path of the file that contains the server private key
  • General.TLS.Certificate = fully qualified path of the file that contains the server certificate
  • General.TLS.RootCAs = fully qualified path of the file that contains the certificate chain of the CA that issued TLS server certificate

By default, TLS client authentication is turned off on orderer, as is the case with peer. To enable TLS client authentication, set the following config properties:

  • General.TLS.ClientAuthRequired = true
  • General.TLS.ClientRootCAs = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate

TLS with client authentication can also be enabled by setting the following environment variables:

  • ORDERER_GENERAL_TLS_ENABLED = true
  • ORDERER_GENERAL_TLS_PRIVATEKEY = fully qualified path of the file that contains the server private key
  • ORDERER_GENERAL_TLS_CERTIFICATE = fully qualified path of the file that contains the server certificate
  • ORDERER_GENERAL_TLS_ROOTCAS = fully qualified path of the file that contains the certificate chain of the CA that issued TLS server certificate
  • ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED = true
  • ORDERER_GENERAL_TLS_CLIENTROOTCAS = fully qualified path of the file that contains the certificate chain of the CA that issued TLS server certificate

Configuring TLS for the peer CLI

The following environment variables must be set when running peer CLI commands against a TLS enabled peer node:

  • CORE_PEER_TLS_ENABLED = true
  • CORE_PEER_TLS_ROOTCERT_FILE = fully qualified path of the file that contains cert chain of the CA that issued the TLS server cert

If TLS client authentication is also enabled on the remote server, the following variables must be set in addition to those above:

  • CORE_PEER_TLS_CLIENTAUTHREQUIRED = true
  • CORE_PEER_TLS_CLIENTCERT_FILE = fully qualified path of the client certificate
  • CORE_PEER_TLS_CLIENTKEY_FILE = fully qualified path of the client private key

When running a command that connects to the orderer service, like peer channel <create|update|fetch> or peer chaincode <invoke|instantiate>, the following command line arguments must also be specified if TLS is enabled on the orderer:

  • --tls
  • --cafile <fully qualified path of the file that contains cert chain of the orderer CA>

If TLS client authentication is enabled on the orderer, the following arguments must be specified as well:

  • --clientauth
  • --keyfile <fully qualified path of the file that contains the client private key>
  • --certfile <fully qualified path of the file that contains the client certificate>
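
For example, a channel creation against a TLS-enabled orderer that also requires client authentication might look like the following (the orderer address and file paths are illustrative):

peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel.tx --tls --cafile ./orderer-ca.crt --clientauth --certfile ./client.crt --keyfile ./client.key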

Debugging TLS issues

Before debugging TLS issues, it is advisable to enable GRPC debug on both the TLS client and the server side to get additional information. To enable GRPC debug, set the environment variable CORE_LOGGING_GRPC to DEBUG.
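
For example, to start a peer with GRPC debugging enabled:

CORE_LOGGING_GRPC=DEBUG peer node start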

If you see the error message remote error: tls: bad certificate on the client side, it usually means that the TLS server has enabled client authentication and the server either did not receive the correct client certificate or it received a client certificate that it does not trust. Make sure the client is sending its certificate and that it has been signed by one of the CA certificates trusted by the peer or orderer node.

If you see the error message remote error: tls: bad certificate in your chaincode logs, ensure that your chaincode has been built using the chaincode shim provided with Fabric v1.1 or newer. If your chaincode does not contain a vendored copy of the shim, deleting the chaincode container and restarting its peer will rebuild the chaincode container using the current shim version. If your chaincode vendored a previous version of the shim, review the documentation on how to upgrade a vendored shim.

Bringing up a Kafka-based Ordering Service

Caveat emptor

This document assumes that the reader generally knows how to set up a Kafka cluster and a ZooKeeper ensemble. The purpose of this guide is to identify the steps you need to take so as to have a set of Hyperledger Fabric ordering service nodes (OSNs) use your Kafka cluster and provide an ordering service to your blockchain network.

Big picture

Each channel maps to a separate single-partition topic in Kafka. When an OSN receives transactions via the Broadcast RPC, it checks to make sure that the broadcasting client has permissions to write on the channel, then relays (i.e. produces) those transactions to the appropriate partition in Kafka. This partition is also consumed by the OSN which groups the received transactions into blocks locally, persists them in its local ledger, and serves them to receiving clients via the Deliver RPC. For low-level details, refer to the document that describes how we came to this design — Figure 8 is a schematic representation of the process described above.

Steps

Let K and Z be the number of nodes in the Kafka cluster and the ZooKeeper ensemble respectively:

  1. At a minimum, K should be set to 4. (As we will explain in Step 4 below, this is the minimum number of nodes necessary in order to exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1 broker go down, all channels will continue to be writeable and readable, and new channels can be created.)

  2. Z will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to avoid single point of failures. Anything beyond 7 ZooKeeper servers is considered an overkill.

Then proceed as follows:

  1. Orderers: Encode the Kafka-related information in the network’s genesis block. If you are using configtxgen, edit configtx.yaml —or pick a preset profile for the system channel’s genesis block— so that:

    1. Orderer.OrdererType is set to kafka.

    2. Orderer.Kafka.Brokers contains the address of at least two of the Kafka brokers in your cluster in IP:port notation. The list does not need to be exhaustive. (These are your bootstrap brokers.)

  2. Orderers: Set the maximum block size. Each block will have at most Orderer.AbsoluteMaxBytes bytes (not including headers), a value that you can set in configtx.yaml. Let the value you pick here be A and make note of it — it will affect how you configure your Kafka brokers in Step 4.

  3. Orderers: Create the genesis block. Use configtxgen. The settings you picked in Steps 1 and 2 above are system-wide settings, i.e. they apply across the network for all the OSNs. Make note of the genesis block’s location.

  4. Kafka cluster: Configure your Kafka brokers appropriately. Ensure that every Kafka broker has these keys configured:

    1. unclean.leader.election.enable = false — Data consistency is key in a blockchain environment. We cannot have a channel leader chosen outside of the in-sync replica set, or we run the risk of overwriting the offsets that the previous leader produced, and —as a result— rewrite the blockchain that the orderers produce.

    2. min.insync.replicas = M — Where you pick a value M such that 1 < M < N (see default.replication.factor below). Data is considered committed when it is written to at least M replicas (which are then considered in-sync and belong to the in-sync replica set, or ISR). In any other case, the write operation returns an error. Then:

      1. If up to N-M replicas —out of the N that the channel data is written to— become unavailable, operations proceed normally.

      2. If more replicas become unavailable, Kafka cannot maintain an ISR set of M, so it stops accepting writes. Reads work without issues. The channel becomes writeable again when M replicas get in-sync.

    3. default.replication.factor = N — Where you pick a value N such that N < K. A replication factor of N means that each channel will have its data replicated to N brokers. These are the candidates for the ISR set of a channel. As we noted in the min.insync.replicas section above, not all of these brokers have to be available all the time. N should be set strictly smaller than K because channel creations cannot go forward if fewer than N brokers are up. So if you set N = K, a single broker going down means that no new channels can be created on the blockchain network — the crash fault tolerance of the ordering service is non-existent.

      Based on what we’ve described above, the minimum allowed values for M and N are 2 and 3 respectively. This configuration allows for the creation of new channels to go forward, and for all channels to continue to be writeable.

    4. message.max.bytes and replica.fetch.max.bytes should be set to a value larger than A, the value you picked in Orderer.AbsoluteMaxBytes in Step 2 above. Add some buffer to account for headers — 1 MiB is more than enough. The following condition applies:

      Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes
      

      (For completeness, we note that message.max.bytes should be strictly smaller than socket.request.max.bytes which is set by default to 100 MiB. If you wish to have blocks larger than 100 MiB you will need to edit the hard-coded value in brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuild the binary from source. This is not advisable.)

    5. log.retention.ms = -1. Until the ordering service adds support for pruning of the Kafka logs, you should disable time-based retention and prevent segments from expiring. (Size-based retention —see log.retention.bytes— is disabled by default in Kafka at the time of this writing, so there’s no need to set it explicitly.)

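    Putting the broker keys above together: with the minimum values M = 2 and N = 3 (for K = 4 brokers), and assuming purely for illustration that Orderer.AbsoluteMaxBytes (A) was set to 10 MiB in Step 2, each broker’s configuration would contain something like:

      # never elect an out-of-sync replica as partition leader
      unclean.leader.election.enable=false
      # M = 2: a write is committed once at least 2 in-sync replicas have it
      min.insync.replicas=2
      # N = 3: each channel partition is replicated to 3 of the 4 brokers
      default.replication.factor=3
      # A (10 MiB) plus ~1 MiB of headroom for headers
      message.max.bytes=11534336
      replica.fetch.max.bytes=11534336
      # disable time-based retention until log pruning is supported
      log.retention.ms=-1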

  5. Orderers: Point each OSN to the genesis block. Edit General.GenesisFile in orderer.yaml so that it points to the genesis block created in Step 3 above. (While at it, ensure all other keys in that YAML file are set appropriately.)

  6. Orderers: Adjust polling intervals and timeouts. (Optional step.)

    1. The Kafka.Retry section in the orderer.yaml file allows you to adjust the frequency of the metadata/producer/consumer requests, as well as the socket timeouts. (These are all settings you would expect to see in a Kafka producer or consumer.)

    2. Additionally, when a new channel is created, or when an existing channel is reloaded (in case of a just-restarted orderer), the orderer interacts with the Kafka cluster in the following ways:

      1. It creates a Kafka producer (writer) for the Kafka partition that corresponds to the channel.

      2. It uses that producer to post a no-op CONNECT message to that partition.

      3. It creates a Kafka consumer (reader) for that partition.

      If any of these steps fail, you can adjust the frequency with which they are repeated. Specifically they will be re-attempted every Kafka.Retry.ShortInterval for a total of Kafka.Retry.ShortTotal, and then every Kafka.Retry.LongInterval for a total of Kafka.Retry.LongTotal until they succeed. Note that the orderer will be unable to write to or read from a channel until all of the steps above have been completed successfully.

  7. Set up the OSNs and Kafka cluster so that they communicate over SSL. (Optional step, but highly recommended.) Refer to the Confluent guide for the Kafka cluster side of the equation, and set the keys under Kafka.TLS in orderer.yaml on every OSN accordingly.

  8. Bring up the nodes in the following order: ZooKeeper ensemble, Kafka cluster, ordering service nodes.

其他注意事项(Additional considerations)

  1. Preferred message size. In Step 2 above (see the Steps section) you can also set the preferred size of blocks by setting the Orderer.Batchsize.PreferredMaxBytes key. Kafka offers higher throughput when dealing with relatively small messages; aim for a value no bigger than 1 MiB.

  2. Using environment variables to override settings. When using the sample Kafka and Zookeeper Docker images provided with Fabric (see images/kafka and images/zookeeper respectively), you can override a Kafka broker or a ZooKeeper server’s settings by using environment variables. Replace the dots of the configuration key with underscores — e.g. KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false will allow you to override the default value of unclean.leader.election.enable. The same applies to the OSNs for their local configuration, i.e. what can be set in orderer.yaml. For example ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s allows you to override the default value for Orderer.Kafka.Retry.ShortInterval.

Kafka Protocol Version Compatibility

Fabric uses the sarama client library and vendors a version of it that supports Kafka 0.10 to 1.0, yet is still known to work with older versions.

Using the Kafka.Version key in orderer.yaml, you can configure which version of the Kafka protocol is used to communicate with the Kafka cluster’s brokers. Kafka brokers are backward compatible with older protocol versions, so upgrading your Kafka brokers to a new version does not require an update of the Kafka.Version key value; however, the Kafka cluster might suffer a performance penalty while using an older protocol version.

Debugging

Set General.LogLevel to DEBUG and Kafka.Verbose to true in orderer.yaml.

Example

Sample Docker Compose configuration files inline with the recommended settings above can be found under the fabric/bddtests directory. Look for dc-orderer-kafka-base.yml and dc-orderer-kafka.yml.

Commands Reference

peer

Description

The peer command has five different subcommands, each of which allows administrators to perform a specific set of tasks related to a peer. For example, you can use the peer channel subcommand to join a peer to a channel, or the peer chaincode command to deploy a smart contract chaincode to a peer.

Syntax

The peer command has five different subcommands within it:

peer chaincode [option] [flags]
peer channel   [option] [flags]
peer logging   [option] [flags]
peer node      [option] [flags]
peer version   [option] [flags]

Each subcommand has different options available, and these are described in their own dedicated topic. For brevity, we often refer to a command (peer), a subcommand (channel), or subcommand option (fetch) simply as a command.

If a subcommand is specified without an option, then it will return some high level help text as described in the --help flag below.

Flags

Each peer subcommand has a specific set of flags associated with it, many of which are designated global because they can be used in all subcommand options. These flags are described with the relevant peer subcommand.

The top level peer command has the following flags:

  • --help

    Use --help to get brief help text for any peer command. The --help flag is very useful – it can be used to get command help, subcommand help, and even option help.

    For example

    peer --help
    peer channel --help
    peer channel list --help
    

    See individual peer subcommands for more detail.

  • --logging-level <string>

    This flag sets the logging level for a peer when it is started.

    There are six possible values for <string> : debug, info, notice, warning, error, and critical.

    If logging-level is not explicitly specified, then it is taken from the CORE_LOGGING_LEVEL environment variable if it is set. If CORE_LOGGING_LEVEL is not set then the file sampleconfig/core.yaml is used to determine the logging level for the peer.

    You can find the current logging level for a specific component on the peer by running peer logging getlevel <component-name>.

  • --version

    Use this flag to show detailed information about how the peer was built. This flag cannot be applied to peer subcommands or their options.

Usage

Here are some examples using the different flags available on the peer command.

  • Using the --help flag on the peer channel join command.

    peer channel join --help
    
    Joins the peer to a channel.
    
    Usage:
      peer channel join [flags]
    
    Flags:
      -b, --blockpath string   Path to file containing genesis block
    
    Global Flags:
          --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
          --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
          --clientauth                          Use mutual TLS when communicating with the orderer endpoint
          --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
          --logging-level string                Default logging level and overrides, see core.yaml for full syntax
      -o, --orderer string                      Ordering service endpoint
          --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
          --tls                                 Use TLS when communicating with the orderer endpoint
      -v, --version                             Display current version of fabric peer server
    

    This shows brief help syntax for the peer channel join command.

  • Using the --version flag on the peer command.

    peer --version
    
    peer:
     Version: 1.1.0-alpha
     Go version: go1.9.2
     OS/Arch: linux/amd64
     Experimental features: false
     Chaincode:
      Base Image Version: 0.4.5
      Base Docker Namespace: hyperledger
      Base Docker Label: org.hyperledger.fabric
      Docker Namespace: hyperledger
    

    This shows that this peer was built using an alpha of Hyperledger Fabric version 1.1.0, compiled with GOLANG 1.9.2. It can be used on Linux operating systems with AMD64 compatible instruction sets.

peer chaincode

Description

The peer chaincode subcommand allows administrators to perform chaincode related operations on a peer, such as installing, instantiating, invoking, packaging, querying, and upgrading chaincode.

Syntax

The peer chaincode subcommand has the following syntax:

peer chaincode install      [flags]
peer chaincode instantiate  [flags]
peer chaincode invoke       [flags]
peer chaincode list         [flags]
peer chaincode package      [flags]
peer chaincode query        [flags]
peer chaincode signpackage  [flags]
peer chaincode upgrade      [flags]

The different subcommand options (install, instantiate...) relate to the different chaincode operations that are relevant to a peer. For example, use the peer chaincode install subcommand option to install a chaincode on a peer, or the peer chaincode query subcommand option to query a chaincode for the current value on a peer’s ledger.

Each peer chaincode subcommand is described together with its options in its own section in this topic.

Flags

Each peer chaincode subcommand has both a set of flags specific to the individual subcommand and a set of global flags that relate to all peer chaincode subcommands. Not all subcommands use these flags. For instance, the query subcommand does not need the --orderer flag.

The individual flags are described with the relevant subcommand. The global flags are

  • --cafile <string>

    Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint

  • --certfile <string>

    Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint

  • --keyfile <string>

    Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint

  • -o or --orderer <string>

    Ordering service endpoint specified as <hostname or IP address>:<port>

  • --ordererTLSHostnameOverride <string>

    The hostname override to use when validating the TLS connection to the orderer

  • --tls

    Use TLS when communicating with the orderer endpoint

  • --transient <string>

    Transient map of arguments in JSON encoding

  • --logging-level <string>

    Default logging level and overrides, see core.yaml for full syntax

peer chaincode install

Install Description

The peer chaincode install command allows administrators to install chaincode onto the filesystem of a peer.

Install Syntax

The peer chaincode install command has the following syntax:

peer chaincode install [flags]

Note: An install can also be performed using a chaincode packaged via the peer chaincode package command (see the peer chaincode package section below for further details on packaging a chaincode for installation). The syntax using a chaincode package is as follows:

peer chaincode install [chaincode-package-file]

where [chaincode-package-file] is the output file from the peer chaincode package command.
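For instance, the two commands might be combined as follows; this is a sketch reusing the mycc name and example chaincode path that appear later in this topic:

# package the chaincode once ...
peer chaincode package ccpack.out -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
# ... then install the identical package on each target peer
peer chaincode install ccpack.out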

Install Flags

The peer chaincode install command has the following command-specific flags:

  • -c, --ctor <string>

    Constructor message for the chaincode in JSON format (default “{}”)

  • -l, --lang <string>

    Language the chaincode is written in (default “golang”)

  • -n, --name <string>

    Name of the chaincode that is being installed. It may consist of alphanumerics, dashes, and underscores

  • -p, --path <string>

    Path to the chaincode that is being installed. For Golang (-l golang) chaincodes, this is the path relative to the GOPATH. For Node.js (-l node) chaincodes, this is either the absolute path or the relative path from where the install command is being performed

  • -v, --version <string>

    Version of the chaincode that is being installed. It may consist of alphanumerics, dashes, underscores, periods, and plus signs

Install Usage

Here are some examples of the peer chaincode install command:

  • To install chaincode named mycc at version 1.0:

    peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
    
    .
    .
    .
    2018-02-22 16:33:52.998 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
    2018-02-22 16:33:52.998 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
    .
    .
    .
    2018-02-22 16:33:53.194 UTC [chaincodeCmd] install -> DEBU 010 Installed remotely response:<status:200 payload:"OK" >
    2018-02-22 16:33:53.194 UTC [main] main -> INFO 011 Exiting.....
    

    Here you can see that the install completed successfully based on the log message:

    2018-02-22 16:33:53.194 UTC [chaincodeCmd] install -> DEBU 010 Installed remotely response:<status:200 payload:"OK" >
    
  • To install chaincode package ccpack.out generated with the package subcommand

    peer chaincode install ccpack.out
    
    .
    .
    .
    2018-02-22 18:18:05.584 UTC [chaincodeCmd] install -> DEBU 005 Installed remotely response:<status:200 payload:"OK" >
    2018-02-22 18:18:05.584 UTC [main] main -> INFO 006 Exiting.....
    

    Here you can see that the install completed successfully based on the log message:

    2018-02-22 18:18:05.584 UTC [chaincodeCmd] install -> DEBU 005 Installed remotely response:<status:200 payload:"OK" >
    

peer chaincode instantiate

Instantiate Description

The peer chaincode instantiate command allows administrators to instantiate chaincode on a channel of which the peer is a member.

Instantiate Syntax

The peer chaincode instantiate command has the following syntax:

peer chaincode instantiate [flags]
Instantiate Flags

The peer chaincode instantiate command has the following command-specific flags:

  • -C, --channelID <string>

    Name of the channel where the chaincode should be instantiated

  • -c, --ctor <string>

    Constructor message for the chaincode in JSON format (default “{}”)

  • -E, --escc <string>

    Name of the endorsement system chaincode to be used for this chaincode (default “escc”)

  • -n, --name <string>

    Name of the chaincode that is being instantiated

  • -P, --policy <string>

    Endorsement policy associated to this chaincode. By default fabric will generate an endorsement policy equivalent to “any member from the organizations currently in the channel”

  • -v, --version <string>

    Version of the chaincode that is being instantiated

  • -V, --vscc <string>

    Name of the verification system chaincode to be used for this chaincode (default “vscc”)

The global peer command flags also apply:

  • --cafile <string>
  • --certfile <string>
  • --keyfile <string>
  • -o, --orderer <string>
  • --ordererTLSHostnameOverride <string>
  • --tls
  • --transient <string>
If the --orderer flag is not specified, the command will attempt to retrieve the orderer information for the channel from the peer before issuing the instantiate command.
Instantiate Usage

Here are some examples of the peer chaincode instantiate command, which instantiates the chaincode named mycc at version 1.0 on channel mychannel:

  • Using the --tls and --cafile global flags to instantiate the chaincode in a network with TLS enabled:

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
    
    2018-02-22 16:33:53.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:33:53.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    2018-02-22 16:34:08.698 UTC [main] main -> INFO 003 Exiting.....
    
  • Using only the command-specific options to instantiate the chaincode in a network with TLS disabled:

    peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
    
    
    2018-02-22 16:34:09.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:34:09.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    2018-02-22 16:34:24.698 UTC [main] main -> INFO 003 Exiting.....
    

peer chaincode invoke

Invoke Description

The peer chaincode invoke command allows administrators to call chaincode functions on a peer using the supplied arguments. The CLI invokes chaincode by sending a transaction proposal to a peer. The peer will execute the chaincode and send the endorsed proposal response (or error) to the CLI. On receipt of an endorsed proposal response, the CLI will construct a transaction with it and send it to the orderer.

Invoke Syntax

The peer chaincode invoke command has the following syntax:

peer chaincode invoke [flags]
Invoke Flags

The peer chaincode invoke command has the following command-specific flags:

  • -C, --channelID <string>

    Name of the channel where the chaincode should be invoked

  • -c, --ctor <string>

    Constructor message for the chaincode in JSON format (default “{}”)

  • -n, --name <string>

    Name of the chaincode that is being invoked

The global peer command flags also apply:

  • --cafile <string>
  • --certfile <string>
  • --keyfile <string>
  • -o, --orderer <string>
  • --ordererTLSHostnameOverride <string>
  • --tls
  • --transient <string>
If the --orderer flag is not specified, the command will attempt to retrieve the orderer information for the channel from the peer before issuing the invoke command.
Invoke Usage

Here is an example of the peer chaincode invoke command, which invokes the chaincode named mycc at version 1.0 on channel mychannel, requesting to move 10 units from variable a to variable b:

  • peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
    
    2018-02-22 16:34:27.069 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:34:27.069 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    .
    .
    .
    2018-02-22 16:34:27.106 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> DEBU 00a ESCC invoke result: version:1 response:<status:200 message:"OK" > payload:"\n \237mM\376? [\214\002 \332\204\035\275q\227\2132A\n\204&\2106\037W|\346#\3413\274\022Y\nE\022\024\n\004lscc\022\014\n\n\n\004mycc\022\002\010\003\022-\n\004mycc\022%\n\007\n\001a\022\002\010\003\n\007\n\001b\022\002\010\003\032\007\n\001a\032\00290\032\010\n\001b\032\003210\032\003\010\310\001\"\013\022\004mycc\032\0031.0" endorsement:<endorser:"\n\007Org1MSP\022\262\006-----BEGIN CERTIFICATE-----\nMIICLjCCAdWgAwIBAgIRAJYomxY2cqHA/fbRnH5a/bwwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzEuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzEuZXhhbXBsZS5jb20wHhcNMTgwMjIyMTYyODE0WhcNMjgwMjIwMTYyODE0\nWjBwMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzETMBEGA1UECxMKRmFicmljUGVlcjEfMB0GA1UEAxMWcGVl\ncjAub3JnMS5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABDEa\nWNNniN3qOCQL89BGWfY39f5V3o1pi//7JFDHATJXtLgJhkK5KosDdHuKLYbCqvge\n46u3AC16MZyJRvKBiw6jTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAA\nMCsGA1UdIwQkMCKAIN7dJR9dimkFtkus0R5pAOlRz5SA3FB5t8Eaxl9A7lkgMAoG\nCCqGSM49BAMCA0cAMEQCIC2DAsO9QZzQmKi8OOKwcCh9Gd01YmWIN3oVmaCRr8C7\nAiAlQffq2JFlbh6OWURGOko6RckizG8oVOldZG/Xj3C8lA==\n-----END CERTIFICATE-----\n" signature:"0D\002 \022_\342\350\344\231G&\237\n\244\375\302J\220l\302\345\210\335D\250y\253P\0214:\221e\332@\002 \000\254\361\224\247\210\214L\277\370\222\213\217\301\r\341v\227\265\277\336\256^\217\336\005y*\321\023\025\367" >
    2018-02-22 16:34:27.107 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 00b Chaincode invoke successful. result: status:200
    2018-02-22 16:34:27.107 UTC [main] main -> INFO 00c Exiting.....
    

    Here you can see that the invoke was submitted successfully based on the log message:

    2018-02-22 16:34:27.107 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 00b Chaincode invoke successful. result: status:200
    
A successful response indicates that the transaction was submitted for ordering
successfully. The transaction will then be added to a block and, finally, validated
or invalidated by each peer on the channel.
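Since a successful invoke response only confirms submission for ordering, one way to confirm the committed state afterwards is a query; this is a sketch reusing the mycc example above, assuming the block containing the invoke has since been committed:

# after the block commits, variable a should reflect the moved units
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'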

peer chaincode list

List Description

The peer chaincode list command allows administrators to list the chaincodes installed on a peer or the chaincodes instantiated on a channel of which the peer is a member.

List Syntax

The peer chaincode list command has the following syntax:

peer chaincode list [--installed|--instantiated -C <channel-name>]
List Flags

The peer chaincode list command has the following command-specific flags:

  • -C, --channelID <string>

    Name of the channel to list instantiated chaincodes for

  • --installed

    Use this flag to list the installed chaincodes on a peer

  • --instantiated

    Use this flag to list the instantiated chaincodes on a channel that the peer is a member of

List Usage

Here are some examples of the peer chaincode list command:

  • Using the --installed flag to list the chaincodes installed on a peer.

    peer chaincode list --installed
    
    Get installed chaincodes on peer:
    Name: mycc, Version: 1.0, Path: github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02, Id: 8cc2730fdafd0b28ef734eac12b29df5fc98ad98bdb1b7e0ef96265c3d893d61
    2018-02-22 17:07:13.476 UTC [main] main -> INFO 001 Exiting.....
    

    You can see that the peer has installed a chaincode called mycc which is at version 1.0.

  • Using the --instantiated flag in combination with the -C (channel ID) flag to list the chaincodes instantiated on a channel.

    peer chaincode list --instantiated -C mychannel
    
    Get instantiated chaincodes on channel mychannel:
    Name: mycc, Version: 1.0, Path: github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02, Escc: escc, Vscc: vscc
    2018-02-22 17:07:42.969 UTC [main] main -> INFO 001 Exiting.....
    

    You can see that chaincode mycc at version 1.0 is instantiated on channel mychannel.

peer chaincode package

Package Description

The peer chaincode package command allows administrators to package the materials necessary to perform a chaincode install. This ensures the same chaincode package can be consistently installed on multiple peers.

Package Syntax

The peer chaincode package command has the following syntax:

peer chaincode package [output-file] [flags]
Package Flags

The peer chaincode package command has the following command-specific flags:

  • -c, --ctor <string>

    Constructor message for the chaincode in JSON format (default “{}”)

  • -i, --instantiate-policy <string>

    Instantiation policy for the chaincode. Currently only policies that require at most 1 signature (e.g., “OR (‘Org1MSP.peer’,’Org2MSP.peer’)”) are supported.

  • -l, --lang <string>

    Language the chaincode is written in (default “golang”)

  • -n, --name <string>

    Name of the chaincode that is being installed. It may consist of alphanumerics, dashes, and underscores

  • -p, --path <string>

    Path to the chaincode that is being packaged. For Golang (-l golang) chaincodes, this is the path relative to the GOPATH. For Node.js (-l node) chaincodes, this is either the absolute path or the relative path from where the package command is being performed

  • -s, --cc-package

    Create a package for storing chaincode ownership information in addition to the raw chaincode deployment spec (however, see note below.)

  • -S, --sign

    Used with the -s flag, specify this flag to add owner endorsements to the package using the local MSP (however, see note below.)

  • -v, --version <string>

    Version of the chaincode that is being installed. It may consist of alphanumerics, dashes, underscores, periods, and plus signs

The metadata from the -s and -S flags is not currently used. These flags are meant for future extensions and will likely undergo implementation changes. It is recommended that they not be used.
Package Usage

Here is an example of the peer chaincode package command, which packages the chaincode named mycc at version 1.1, creates the chaincode deployment spec, signs the package using the local MSP, and outputs it as ccpack.out:

  • peer chaincode package ccpack.out -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -v 1.1 -s -S
    
    .
    .
    .
    2018-02-22 17:27:01.404 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
    2018-02-22 17:27:01.405 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
    .
    .
    .
    2018-02-22 17:27:01.879 UTC [chaincodeCmd] chaincodePackage -> DEBU 011 Packaged chaincode into deployment spec of size <3426>, with args = [ccpack.out]
    2018-02-22 17:27:01.879 UTC [main] main -> INFO 012 Exiting.....
    

peer chaincode query

Query Description

The peer chaincode query command allows the chaincode to be queried by calling the Invoke method on the chaincode. The difference between the query and the invoke subcommands is that, on successful response, invoke proceeds to submit a transaction to the orderer whereas query just outputs the response, successful or otherwise, to stdout.

Query Syntax

The peer chaincode query command has the following syntax:

peer chaincode query [flags]
Query Flags

The peer chaincode query command has the following command-specific flags:

  • -C, --channelID <string>

    Name of the channel where the chaincode should be queried

  • -c, --ctor <string>

    Constructor message for the chaincode in JSON format (default “{}”)

  • -n, --name <string>

    Name of the chaincode that is being queried

  • -r, --raw

    Output the query value as raw bytes (default)

  • -x, --hex

    Output the query value byte array in hexadecimal. Incompatible with --raw

The global peer command flag also applies:

  • --transient <string>
Query Usage

Here is an example of the peer chaincode query command, which queries the peer ledger for the chaincode named mycc at version 1.0 for the value of variable a:

  • peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
    
    2018-02-22 16:34:30.816 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:34:30.816 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    Query Result: 90
    

    You can see from the output that variable a had a value of 90 at the time of the query.

peer chaincode signpackage

signpackage Description

The peer chaincode signpackage command is used to add a signature to a given chaincode package created with the peer chaincode package command using the -s and -S options.

signpackage Syntax

The peer chaincode signpackage command has the following syntax:

peer chaincode signpackage <inputpackage> <outputpackage>
signpackage Usage

Here is an example of the peer chaincode signpackage command, which accepts an existing signed package and creates a new one with the signature of the local MSP appended to it.

peer chaincode signpackage ccwith1sig.pak ccwith2sig.pak
Wrote signed package to ccwith2sig.pak successfully
2018-02-24 19:32:47.189 EST [main] main -> INFO 002 Exiting.....

peer chaincode upgrade

Upgrade Description

The peer chaincode upgrade command allows administrators to upgrade the chaincode instantiated on a channel to a newer version.

Upgrade Syntax

The peer chaincode upgrade command has the following syntax:

peer chaincode upgrade [flags]
Upgrade Flags

The peer chaincode upgrade command has the following command-specific flags:

  • -C, --channelID <string>

    Name of the channel where the chaincode should be upgraded

  • -c, --ctor <string>

    Constructor message for the chaincode in JSON format (default “{}”)

  • -E, --escc <string>

    Name of the endorsement system chaincode to be used for this chaincode (default “escc”)

  • -n, --name <string>

    Name of the chaincode that is being upgraded

  • -P, --policy <string>

    Endorsement policy associated to this chaincode. By default fabric will generate an endorsement policy equivalent to “any member from the organizations currently in the channel”

  • -v, --version <string>

    Version of the upgraded chaincode

  • -V, --vscc <string>

    Name of the verification system chaincode to be used for this chaincode (default “vscc”)

The global peer command flags also apply:

  • --cafile <string>
  • -o, --orderer <string>
  • --tls
If the --orderer flag is not specified, the command will attempt to retrieve the orderer information for the channel from the peer before issuing the upgrade command.
Upgrade Usage

Here is an example of the peer chaincode upgrade command, which upgrades the chaincode named mycc at version 1.0 on channel mychannel to version 1.1, which contains a new variable c:

  • Using the --tls and --cafile global flags to upgrade the chaincode in a network with TLS enabled:

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    peer chaincode upgrade -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.1 -c '{"Args":["init","a","100","b","200","c","300"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
    
    .
    .
    .
    2018-02-22 18:26:31.433 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
    2018-02-22 18:26:31.434 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
    2018-02-22 18:26:31.435 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode enabled
    2018-02-22 18:26:31.435 UTC [chaincodeCmd] upgrade -> DEBU 006 Get upgrade proposal for chaincode <name:"mycc" version:"1.1" >
    .
    .
    .
    2018-02-22 18:26:46.687 UTC [chaincodeCmd] upgrade -> DEBU 009 endorse upgrade proposal, get response <status:200 message:"OK" payload:"\n\004mycc\022\0031.1\032\004escc\"\004vscc*,\022\014\022\n\010\001\022\002\010\000\022\002\010\001\032\r\022\013\n\007Org1MSP\020\003\032\r\022\013\n\007Org2MSP\020\0032f\n \261g(^v\021\220\240\332\251\014\204V\210P\310o\231\271\036\301\022\032\205fC[|=\215\372\223\022 \311b\025?\323N\343\325\032\005\365\236\001XKj\004E\351\007\247\265fu\305j\367\331\275\253\307R\032 \014H#\014\272!#\345\306s\323\371\350\364\006.\000\356\230\353\270\263\215\217\303\256\220i^\277\305\214: \375\200zY\275\203}\375\244\205\035\340\226]l!uE\334\273\214\214\020\303\3474\360\014\234-\006\315B\031\022\010\022\006\010\001\022\002\010\000\032\r\022\013\n\007Org1MSP\020\001" >
    .
    .
    .
    2018-02-22 18:26:46.693 UTC [chaincodeCmd] upgrade -> DEBU 00c Get Signed envelope
    2018-02-22 18:26:46.693 UTC [chaincodeCmd] chaincodeUpgrade -> DEBU 00d Send signed envelope to orderer
    2018-02-22 18:26:46.908 UTC [main] main -> INFO 00e Exiting.....
    
  • Using only the command-specific options to upgrade the chaincode in a network with TLS disabled:

    peer chaincode upgrade -o orderer.example.com:7050 -C mychannel -n mycc -v 1.1 -c '{"Args":["init","a","100","b","200","c","300"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
    
    .
    .
    .
    2018-02-22 18:28:31.433 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
    2018-02-22 18:28:31.434 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
    2018-02-22 18:28:31.435 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode enabled
    2018-02-22 18:28:31.435 UTC [chaincodeCmd] upgrade -> DEBU 006 Get upgrade proposal for chaincode <name:"mycc" version:"1.1" >
    .
    .
    .
    2018-02-22 18:28:46.687 UTC [chaincodeCmd] upgrade -> DEBU 009 endorse upgrade proposal, get response <status:200 message:"OK" payload:"\n\004mycc\022\0031.1\032\004escc\"\004vscc*,\022\014\022\n\010\001\022\002\010\000\022\002\010\001\032\r\022\013\n\007Org1MSP\020\003\032\r\022\013\n\007Org2MSP\020\0032f\n \261g(^v\021\220\240\332\251\014\204V\210P\310o\231\271\036\301\022\032\205fC[|=\215\372\223\022 \311b\025?\323N\343\325\032\005\365\236\001XKj\004E\351\007\247\265fu\305j\367\331\275\253\307R\032 \014H#\014\272!#\345\306s\323\371\350\364\006.\000\356\230\353\270\263\215\217\303\256\220i^\277\305\214: \375\200zY\275\203}\375\244\205\035\340\226]l!uE\334\273\214\214\020\303\3474\360\014\234-\006\315B\031\022\010\022\006\010\001\022\002\010\000\032\r\022\013\n\007Org1MSP\020\001" >
    .
    .
    .
    2018-02-22 18:28:46.693 UTC [chaincodeCmd] upgrade -> DEBU 00c Get Signed envelope
    2018-02-22 18:28:46.693 UTC [chaincodeCmd] chaincodeUpgrade -> DEBU 00d Send signed envelope to orderer
    2018-02-22 18:28:46.908 UTC [main] main -> INFO 00e Exiting.....
    

peer channel

Description

The peer channel command allows administrators to perform channel related operations on a peer, such as joining a channel or listing the channels to which a peer is joined.

Syntax

The peer channel command has the following syntax:

peer channel create       [flags]
peer channel fetch        [flags]
peer channel getinfo      [flags]
peer channel join         [flags]
peer channel list         [flags]
peer channel signconfigtx [flags]
peer channel update       [flags]

For brevity, we often refer to a command (peer), a subcommand (channel), or subcommand option (fetch) simply as a command.

The different command options (create, fetch...) relate to the different channel operations that are relevant to a peer. For example, use the peer channel join command to join a peer to a channel, or the peer channel list command to show the channels to which a peer is joined.

Each peer channel subcommand is described together with its options in its own section in this topic.

Flags

Each peer channel command option has a set of flags specific to it, and these are described with the relevant subcommand option.

All peer channel command options also have a set of global flags that can be applied to peer channel command options.

The global flags are as follows:

  • --cafile <string>

    where <string> is a fully qualified path to a file containing a PEM-encoded certificate chain of the Certificate Authority of the orderer with whom the peer is communicating. Use in conjunction with the --tls flag.

  • --certfile <string>

    where <string> is a fully qualified path to a file containing a PEM-encoded X.509 certificate used for mutual authentication with the orderer. Use in conjunction with the --clientauth flag.

  • --clientauth

    Use this flag to enable mutual TLS communication with the orderer. Use in conjunction with the --certfile and --keyfile flags.

  • --keyfile <string>

    where <string> is a fully qualified path to a file containing a PEM-encoded X.509 private key used for mutual authentication with the orderer. Use in conjunction with the --clientauth flag.

  • -o, --orderer <string>

    where <string> is the fully qualified address and port of the orderer with whom the peer is communicating. If the port is not specified, it will default to port 7050.

  • --ordererTLSHostnameOverride <string>

    where <string> is the hostname override to use when using TLS to communicate with the orderer specified by the --orderer flag. It is necessary to use this flag when the TLS handshake phase of communications between the peer and the orderer uses a different hostname than the subsequent message exchange phase. Use in conjunction with the --tls flag.

  • --tls

    Use this flag to enable TLS communications with an orderer. The certificates identified by --cafile will be used by TLS to authenticate the orderer.

Usage

Here’s an example that uses the --orderer global flag on the peer channel create command.

  • Create a sample channel mychannel defined by the configuration transaction contained in file ./createchannel.txn. Use the orderer at orderer.example.com:7050.

    peer channel create -c mychannel -f ./createchannel.txn --orderer orderer.example.com:7050
    
    2018-02-25 08:23:57.548 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 08:23:57.626 UTC [channelCmd] InitCmdFactory -> INFO 019 Endorser and orderer connections initialized
    2018-02-25 08:23:57.834 UTC [channelCmd] readBlock -> DEBU 020 Received block: 0
    2018-02-25 08:23:57.835 UTC [main] main -> INFO 021 Exiting.....
    

    Block 0 is returned indicating that the channel has been successfully created.

peer channel create

Create Description

The peer channel create command allows administrators to create a new channel. This command connects to an orderer to perform this function – it is not performed on the peer, even though the peer command is used.

To create a channel, the administrator uses the command to submit a configuration update transaction to the orderer. This transaction describes the configuration changes required to create a new channel. Moreover, this transaction must be signed by the required organizations as defined by the current orderer configuration. Configuration transactions can be generated by the configtxgen command and signed by the peer channel signconfigtx command.
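Putting those steps together, a typical creation flow might look like the following sketch; the profile name is illustrative and must exist in your configtx.yaml:

# 1. generate the channel creation transaction from a configtx.yaml profile
configtxgen -outputCreateChannelTx ./createchannel.txn -profile SampleSingleMSPChannelV1_1 -channelID mychannel
# 2. sign it with this peer's identity, if additional signatures are required
peer channel signconfigtx -f ./createchannel.txn
# 3. submit it to the orderer to create the channel
peer channel create -c mychannel -f ./createchannel.txn -o orderer.example.com:7050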

Create Syntax

The peer channel create command has the following syntax:

peer channel create [flags]
Create Flags

The peer channel create command has the following command specific flags:

  • -c, --channelID <string>

    required, where <string> is the name of the channel which is to be created.

  • -f, --file <string>

    required, where <string> identifies a file which contains the configuration transaction required to create this channel. It can be generated by the configtxgen command.

  • -t, --timeout <integer>

    optional, where <integer> specifies channel creation timeout in seconds. If not specified, the default is 5 seconds. Note that if the command times out, then the channel may or may not have been created.

The global peer command flags also apply as follows:

  • -o, --orderer required
  • --cafile optional
  • --certfile optional
  • --clientauth optional
  • --keyfile optional
  • --ordererTLSHostnameOverride optional
  • --tls optional
Create Usage

Here’s an example of the peer channel create command option.

  • Create a new channel mychannel for the network, using the orderer at orderer.example.com:7050. The configuration update transaction required to create this channel is defined in the file ./createchannel.txn. Wait 30 seconds for the channel to be created.

    peer channel create -c mychannel --orderer orderer.example.com:7050 -f ./createchannel.txn -t 30
    
    2018-02-23 06:31:58.568 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-23 06:31:58.669 UTC [channelCmd] InitCmdFactory -> INFO 019 Endorser and orderer connections initialized
    2018-02-23 06:31:58.877 UTC [channelCmd] readBlock -> DEBU 020 Received block: 0
    2018-02-23 06:31:58.878 UTC [main] main -> INFO 021 Exiting.....
    
    ls -l
    
    -rw-r--r-- 1 root root 11982 Feb 25 12:24 mychannel.block
    

    You can see that channel mychannel has been successfully created, as indicated in the output where block 0 (zero) is added to the blockchain for this channel and returned to the peer, where it is stored in the local directory as mychannel.block.

    Block zero is often called the genesis block as it provides the starting configuration for the channel. All subsequent updates to the channel will be captured as configuration blocks on the channel’s blockchain, each of which supersedes the previous configuration.

peer channel fetch

Fetch Description

The peer channel fetch command allows a client to fetch a block from the orderer. The block may contain a configuration transaction or user transactions.

The client must have read access to the channel. This command connects to an orderer to perform this function – it is not performed on the peer, even though the peer client command is used.

Fetch Syntax

The peer channel fetch command has the following syntax:

peer channel fetch [newest|oldest|config|(block number)] [<outputFile>] [flags]

where

  • newest

    returns the most recent block available at the orderer for the channel. This may be a user transaction block or a configuration block.

    This option will also return the block number of the most recent block.

  • oldest

    returns the oldest block available at the orderer for the channel. This may be a user transaction block or a configuration block.

    This option will also return the block number of the oldest available block.

  • config

    returns the most recent configuration block available at the orderer for the channel.

    This option will also return the block number of the most recent configuration block.

  • (block number)

    returns the requested block for the channel. This may be a user transaction block or a configuration block.

    Specifying 0 will result in the genesis block for this channel being returned (if it is still available to the network orderer).

  • <outputFile>

    specifies the name of the file where the fetched block is written. If <outputFile> is not specified, then the block is written to the local directory in a file named as follows:

    • <channelID>_newest.block
    • <channelID>_oldest.block
    • <channelID>_config.block
    • <channelID>_(block number).block
Fetch Flags

The peer channel fetch command has the following command specific flags:

  • -c, --channelID <string>

    required, where <string> is the name of the channel for which the blocks are to be fetched from the orderer.

The global peer command flags also apply:

  • -o, --orderer required
  • --cafile optional
  • --certfile optional
  • --clientauth optional
  • --keyfile optional
  • --ordererTLSHostnameOverride optional
  • --tls optional
Fetch Usage

Here are some examples of the peer channel fetch command.

  • Using the newest option to retrieve the most recent channel block, and store it in the file mychannel.block.

    peer channel fetch newest mychannel.block -c mychannel --orderer orderer.example.com:7050
    
    2018-02-25 13:10:16.137 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 13:10:16.144 UTC [channelCmd] readBlock -> DEBU 00a Received block: 32
    2018-02-25 13:10:16.145 UTC [main] main -> INFO 00b Exiting.....
    
    ls -l
    
    -rw-r--r-- 1 root root 11982 Feb 25 13:10 mychannel.block
    

    You can see that the retrieved block is number 32, and that the information has been written to the file mychannel.block.

  • Using the (block number) option to retrieve a specific block – in this case, block number 16 – and store it in the default block file.

    peer channel fetch 16  -c mychannel --orderer orderer.example.com:7050
    
    2018-02-25 13:46:50.296 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 13:46:50.302 UTC [channelCmd] readBlock -> DEBU 00a Received block: 16
    2018-02-25 13:46:50.302 UTC [main] main -> INFO 00b Exiting.....
    
    ls -l
    
    -rw-r--r-- 1 root root 11982 Feb 25 13:10 mychannel.block
    -rw-r--r-- 1 root root  4783 Feb 25 13:46 mychannel_16.block
    

    You can see that the retrieved block is number 16, and that the information has been written to the default file mychannel_16.block.

For configuration blocks, the block file can be decoded using the configtxlator command. See this command for an example of decoded output. User transaction blocks can also be decoded, but a user program must be written to do this.
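For example, a configuration block fetched with the config option could be decoded like this; a sketch using the configtxlator proto_decode command documented later in this topic:

# fetch the most recent configuration block for mychannel ...
peer channel fetch config mychannel_config.block -c mychannel -o orderer.example.com:7050
# ... then translate the protobuf block into readable JSON
configtxlator proto_decode --type common.Block --input mychannel_config.block --output mychannel_config.json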

peer channel getinfo

GetInfo Description

The peer channel getinfo command allows administrators to retrieve information about the peer’s local blockchain for a particular channel. This includes the current blockchain height, and the hashes of the current block and previous block. Remember that a peer can be joined to more than one channel.

This information can be useful when administrators need to understand the current state of a peer’s blockchain, especially in comparison to other peers in the same channel.

GetInfo Syntax

The peer channel getinfo command has the following syntax:

peer channel getinfo [flags]
GetInfo Flags

The peer channel getinfo command has no specific flags.

None of the global peer command flags apply, since this command does not interact with an orderer.

GetInfo Usage

Here’s an example of the peer channel getinfo command.

  • Get information about the local peer for channel mychannel.

    peer channel getinfo -c mychannel
    
    2018-02-25 15:15:44.135 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    Blockchain info: {"height":5,"currentBlockHash":"JgK9lcaPUNmFb5Mp1qe1SVMsx3o/22Ct4+n5tejcXCw=","previousBlockHash":"f8lZXoAn3gF86zrFq7L1DzW2aKuabH9Ow6SIE5Y04a4="}
    2018-02-25 15:15:44.139 UTC [main] main -> INFO 006 Exiting.....
    

    You can see that the latest block for channel mychannel is block 5. You can also see the cryptographic hashes for the most recent blocks in the channel’s blockchain.

peer channel join

Join Description

The peer channel join command allows administrators to join a peer to an existing channel. The administrator achieves this by using the command to provide a channel genesis block to the peer. The peer will then automatically retrieve the channel’s blocks from other peers in the network, or the orderer, depending on the configuration, and the availability of other peers.

The administrator can create a local genesis block for use by this command by retrieving block 0 from an existing channel using the peer channel fetch command option. The peer channel create command will also return a local genesis block when a new channel is created.
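A common sequence is therefore the following sketch (file and channel names are illustrative):

# retrieve block 0 (the genesis block) of mychannel from the orderer ...
peer channel fetch 0 mychannel.genesis.block -c mychannel -o orderer.example.com:7050
# ... and use it to join this peer to the channel
peer channel join -b mychannel.genesis.block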

Join Syntax

The peer channel join command has the following syntax:

peer channel join [flags]
Join Flags

The peer channel join command has the following command specific flags:

  • -b, --blockpath <string>

required, where <string> identifies a file containing the channel genesis block. This block can be retrieved using the peer channel fetch command, requesting block 0 from the channel, or using the peer channel create command.

None of the global peer command flags apply, since this command does not interact with an orderer.

Join Usage

Here’s an example of the peer channel join command.

  • Join a peer to the channel defined in the genesis block identified by the file ./mychannel.genesis.block. In this example, the channel block was previously retrieved by the peer channel fetch command.

    peer channel join -b ./mychannel.genesis.block
    
    2018-02-25 12:25:26.511 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 12:25:26.571 UTC [channelCmd] executeJoin -> INFO 006 Successfully submitted proposal to join channel
    2018-02-25 12:25:26.571 UTC [main] main -> INFO 007 Exiting.....
    

    You can see that the peer has successfully made a request to join the channel.

peer channel list

List Description

The peer channel list command allows administrators to list the channels to which a peer is joined.

List Syntax

The peer channel list command has the following syntax:

peer channel list [flags]
List Flags

The peer channel list command has no specific flags.

None of the global peer command flags apply, since this command does not interact with an orderer.

List Usage

Here’s an example of the peer channel list command.

  • List the channels to which a peer is joined.

    peer channel list
    
    2018-02-25 14:21:20.361 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    Channels peers has joined:
    mychannel
    2018-02-25 14:21:20.372 UTC [main] main -> INFO 006 Exiting.....
    

    You can see that the peer is joined to channel mychannel.

peer channel signconfigtx

SignConfigTx Description

The peer channel signconfigtx command helps administrators sign a configuration transaction with the peer’s identity credentials prior to submission to an orderer. Typical configuration transactions include creating a channel or updating a channel configuration.

The administrator supplies an input file to the signconfigtx command which describes the configuration transaction. The command then adds the peer’s public identity to the file, and signs the entire payload with the peer’s private key. The command uses the peer’s public and private credentials stored in its local MSP. A new file is not generated; the input file is updated in place.

signconfigtx only signs the configuration transaction; it does not create it, nor submit it to the orderer. Typically, the configuration transaction has been already created using the configtxgen command, and is subsequently submitted to the orderer by an appropriate command such as peer channel update.
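The end-to-end flow might therefore look like this sketch; the profile, organization, and file names are illustrative:

# 1. generate a configuration transaction, here an anchor peer update for Org1
configtxgen -outputAnchorPeersUpdate ./updatechannel.tx -profile SampleSingleMSPChannelV1_1 -channelID mychannel -asOrg Org1
# 2. add this peer's signature to the transaction, in place
peer channel signconfigtx -f ./updatechannel.tx
# 3. submit the signed transaction to the orderer
peer channel update -c mychannel -f ./updatechannel.tx -o orderer.example.com:7050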

SignConfigTx Syntax

The peer channel signconfigtx command has the following syntax:

peer channel signconfigtx [flags]
SignConfigTx Flags

The peer channel signconfigtx command has the following command specific flags:

  • -f, --file <string>

required, where <string> identifies a file containing the channel configuration transaction to be signed on behalf of the peer.

None of the global peer command flags apply, since this command does not interact with an orderer.

SignConfigTx Usage

Here’s an example of the peer channel signconfigtx command.

  • Sign the channel update transaction defined in the file ./updatechannel.tx. The example lists the configuration transaction file before and after the command.

    ls -l
    
    -rw-r--r--  1 anthonyodowd  staff   284 25 Feb 18:16 updatechannel.tx
    
    peer channel signconfigtx -f updatechannel.tx
    
    2018-02-25 18:16:44.456 GMT [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2018-02-25 18:16:44.459 GMT [main] main -> INFO 002 Exiting.....
    
    ls -l
    
    -rw-r--r--  1 anthonyodowd  staff  2180 25 Feb 18:16 updatechannel.tx
    

    You can see that the peer has successfully signed the configuration transaction by the increase in the size of the file updatechannel.tx from 284 bytes to 2180 bytes.

peer channel update

Update Description

The peer channel update command allows administrators to update an existing channel.

To update a channel, the administrator uses the command to submit a configuration transaction to the orderer which describes the required channel configuration changes. This transaction must be signed by the required organizations as defined in the current channel configuration. Configuration transactions can be generated by the configtxgen command and signed by the peer channel signconfigtx command.

The update transaction is sent by the command to the orderer, which validates the change is authorized, and then distributes a configuration block to every peer on the channel. In this way, every peer on the channel maintains a consistent copy of the channel configuration.

Update Syntax

The peer channel update command has the following syntax:

peer channel update [flags]
Update flags

The peer channel update command has the following command specific flags:

  • -c, --channelID <string>

    required, where <string> is the name of the channel which is to be updated.

  • -f, --file <string>

    required, where <string> identifies a transaction configuration file. This file contains the configuration changes required for this channel, and it can be generated by the configtxgen command.

The global peer command flags also apply as follows:

  • -o, --orderer required
  • --cafile optional
  • --certfile optional
  • --clientauth optional
  • --keyfile optional
  • --ordererTLSHostnameOverride optional
  • --tls optional
Update Usage

Here’s an example of the peer channel update command.

  • Update the channel mychannel using the configuration transaction defined in the file ./updatechannel.txn. Use the orderer at orderer.example.com:7050 to send the configuration transaction to all peers in the channel to update their copy of the channel configuration.

    peer channel update -c mychannel -f ./updatechannel.txn -o orderer.example.com:7050
    
    2018-02-23 06:32:11.569 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-23 06:32:11.626 UTC [main] main -> INFO 010 Exiting.....
    

    At this point, the channel mychannel has been successfully updated.

peer version

Description

The peer version command displays the version information of the peer. It displays version, Go version, OS/architecture, if experimental features are turned on, and chaincode information. For example:

 peer:
   Version: 1.1.0-beta-snapshot-a6c3447e
   Go version: go1.9.2
   OS/Arch: linux/amd64
   Experimental features: true
   Chaincode:
    Base Image Version: 0.4.5
    Base Docker Namespace: hyperledger
    Base Docker Label: org.hyperledger.fabric
    Docker Namespace: hyperledger

Syntax

The peer version command has the following syntax:

peer version

peer logging

Description

The peer logging subcommand allows administrators to dynamically view and configure the log levels of a peer.

Syntax

The peer logging subcommand has the following syntax:

peer logging getlevel
peer logging setlevel
peer logging revertlevels

The different subcommand options (getlevel, setlevel, and revertlevels) relate to the different logging operations that are relevant to a peer.

Each peer logging subcommand is described together with its options in its own section in this topic.

peer logging getlevel

Get Level Description

The peer logging getlevel command allows administrators to get the current level for a logging module.

Get Level Syntax

The peer logging getlevel command has the following syntax:

peer logging getlevel <module-name>
Get Level Flags

The peer logging getlevel command does not have any command-specific flags.

Get Level Usage

Here is an example of the peer logging getlevel command:

  • To get the log level for module peer:

    peer logging getlevel peer
    
    2018-02-22 19:10:08.633 UTC [cli/logging] getLevel -> INFO 001 Current log level for peer module 'peer': DEBUG
    2018-02-22 19:10:08.633 UTC [main] main -> INFO 002 Exiting.....
    

peer logging setlevel

Set Level Description

The peer logging setlevel command allows administrators to set the current level for all logging modules that match the module name regular expression provided.

Set Level Syntax

The peer logging setlevel command has the following syntax:

peer logging setlevel <module-name-regular-expression> <log-level>
Set Level Flags

The peer logging setlevel command does not have any command-specific flags.

Set Level Usage

Here are some examples of the peer logging setlevel command:

  • To set the log level for modules matching the regular expression peer to log level WARNING:

    peer logging setlevel peer warning
    2018-02-22 19:14:51.217 UTC [cli/logging] setLevel -> INFO 001 Log level set for peer modules matching regular expression 'peer': WARNING
    2018-02-22 19:14:51.217 UTC [main] main -> INFO 002 Exiting.....
    
  • To set the log level for modules that match the regular expression ^gossip (i.e. all of the gossip logging submodules of the form gossip/<submodule>) to log level ERROR:

    peer logging setlevel ^gossip error
    
    2018-02-22 19:16:46.272 UTC [cli/logging] setLevel -> INFO 001 Log level set for peer modules matching regular expression '^gossip': ERROR
    2018-02-22 19:16:46.272 UTC [main] main -> INFO 002 Exiting.....
    

peer logging revertlevels

Revert Levels Description

The peer logging revertlevels command allows administrators to revert the log levels of all modules to their level at the time the peer completed its startup process.

Revert Levels Syntax

The peer logging revertlevels command has the following syntax:

peer logging revertlevels
Revert Levels Flags

The peer logging revertlevels command does not have any command-specific flags.

Revert Levels Usage

Here is an example of the peer logging revertlevels command:

  • peer logging revertlevels
    
    2018-02-22 19:18:38.428 UTC [cli/logging] revertLevels -> INFO 001 Log levels reverted to the levels at the end of peer startup.
    2018-02-22 19:18:38.428 UTC [main] main -> INFO 002 Exiting.....
    

peer node

Description

The peer node subcommand allows an administrator to start a peer node or check the status of a peer node.

Syntax

The peer node subcommand has the following syntax:

peer node start [flags]
peer node status

peer node start

Start Description

The peer node start command allows administrators to start the peer node process.

The peer node process can be configured using the configuration file core.yaml, which must be located in the directory specified by the environment variable FABRIC_CFG_PATH. For Docker deployments, core.yaml is pre-configured in the peer container's FABRIC_CFG_PATH directory. For native binary deployments, core.yaml is included with the release artifact distribution. The configuration properties located in core.yaml can be overridden using environment variables. For example, the peer.mspConfigPath configuration property can be specified by defining the CORE_PEER_MSPCONFIGPATH environment variable, where CORE_ is the prefix for the environment variables.
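For example, starting the peer with a property overridden from the shell might look like this sketch; the paths are illustrative:

# point the peer at its configuration directory and override peer.mspConfigPath
FABRIC_CFG_PATH=/etc/hyperledger/fabric CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp peer node start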

Start Syntax

The peer node start command has the following syntax:

peer node start [flags]
Start Flags

The peer node start command has the following command specific flag:

  • --peer-chaincodedev

    starts the peer node in chaincode development mode. Normally chaincode containers are started and maintained by the peer. However, in development mode, chaincode is built and started by the user. This mode is useful during the chaincode development phase for iterative development. See more information on development mode in the chaincode tutorial.

The global peer command flags also apply as described in the peer command topic:

  • --logging-level

peer node status

Status Description

The peer node status command allows administrators to see the status of the peer node process. It will show the status of the peer node process running at the peer.address specified in the peer configuration, or overridden by the CORE_PEER_ADDRESS environment variable.

Status Syntax

The peer node status command has the following syntax:

peer node status
Status Flags

The peer node status command has no command specific flags.
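Here is a sketch of checking a peer running at an overridden address; the address is illustrative:

# query the status of the peer process listening at the given address
CORE_PEER_ADDRESS=peer0.org1.example.com:7051 peer node status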

configtxgen

Description

The configtxgen command allows users to create and inspect channel config related artifacts. The content of the generated artifacts is dictated by the contents of configtx.yaml.

Syntax

The configtxgen tool has no sub-commands, but supports flags which can be set to accomplish a number of tasks.

Usage of configtxgen:
  -asOrg string
        Performs the config generation as a particular organization (by name), only including values in the write set that org (likely) has privilege to set
  -channelID string
        The channel ID to use in the configtx (default "testchainid")
  -inspectBlock string
        Prints the configuration contained in the block at the specified path
  -inspectChannelCreateTx string
        Prints the configuration contained in the transaction at the specified path
  -outputAnchorPeersUpdate string
        Creates an config update to update an anchor peer (works only with the default channel creation, and only for the first update)
  -outputBlock string
        The path to write the genesis block to (if set)
  -outputCreateChannelTx string
        The path to write a channel creation configtx to (if set)
  -printOrg string
        Prints the definition of an organization as JSON. (useful for adding an org to a channel manually)
  -profile string
        The profile from configtx.yaml to use for generation. (default "SampleInsecureSolo")
  -version
        Show version information

Usage

Output a genesis block

Write a genesis block to genesis_block.pb for channel orderer-system-channel for profile SampleSingleMSPSoloV1_1.

configtxgen -outputBlock genesis_block.pb -profile SampleSingleMSPSoloV1_1 -channelID orderer-system-channel
Output a channel creation tx

Write a channel creation transaction to create_chan_tx.pb for profile SampleSingleMSPChannelV1_1.

configtxgen -outputCreateChannelTx create_chan_tx.pb -profile SampleSingleMSPChannelV1_1 -channelID application-channel-1
Inspect a genesis block

Print the contents of a genesis block named genesis_block.pb to the screen as JSON.

configtxgen -inspectBlock genesis_block.pb
Inspect a channel creation tx

Print the contents of a channel creation tx named create_chan_tx.pb to the screen as JSON.

configtxgen -inspectChannelCreateTx create_chan_tx.pb
Output anchor peer tx

Output a configuration update transaction to anchor_peer_tx.pb which sets the anchor peers for organization Org1 as defined in profile SampleSingleMSPChannelV1_1 based on configtx.yaml.

configtxgen -outputAnchorPeersUpdate anchor_peer_tx.pb -profile SampleSingleMSPChannelV1_1 -asOrg Org1

Configuration

The configtxgen tool’s output is largely controlled by the content of configtx.yaml. This file is searched for at FABRIC_CFG_PATH and must be present for configtxgen to operate.

This configuration file may be edited, or individual properties may be overridden by setting environment variables, such as CONFIGTX_ORDERER_ORDERERTYPE=kafka.

For many configtxgen operations, a profile name must be supplied. Profiles are a way to express multiple similar configurations in a single file. For instance, one profile might define a channel with 3 orgs, and another might define one with 4 orgs. To accomplish this without the length of the file becoming burdensome, configtx.yaml depends on the standard YAML feature of anchors and references. Base parts of the configuration are tagged with an anchor like &OrdererDefaults and then merged into a profile with a reference like <<: *OrdererDefaults. Note, when configtxgen is operating under a profile, environment variable overrides do not need to include the profile prefix and may be referenced relative to the root element of the profile. For instance, do not specify CONFIGTX_PROFILE_SAMPLEINSECURESOLO_ORDERER_ORDERERTYPE, instead simply omit the profile specifics and use the CONFIGTX prefix followed by the elements relative to the profile name such as CONFIGTX_ORDERER_ORDERERTYPE.
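As an illustration, the anchor-and-reference pattern in configtx.yaml looks roughly like the following minimal excerpt (section contents are elided and the profile name is illustrative):

Orderer: &OrdererDefaults
    OrdererType: solo
    # ... further orderer defaults ...

Profiles:
    SampleInsecureSolo:
        Orderer:
            <<: *OrdererDefaults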

Refer to the sample configtx.yaml shipped with Fabric for all possible configuration options. You may find this file in the config directory of the release artifacts tar, or you may find it under the sampleconfig folder if you are building from source.

configtxlator

Description

The configtxlator command allows users to translate between protobuf and JSON versions of Fabric data structures and to create config updates. The command may either start a REST server to expose its functions over HTTP, or it may be used directly as a command-line tool.

Syntax

The configtxlator tool has four sub-commands.

configtxlator start

Starts the REST server.

usage: configtxlator start [<flags>]

Start the configtxlator REST server

Flags:
  --help                Show context-sensitive help (also try --help-long and --help-man).
  --hostname="0.0.0.0"  The hostname or IP on which the REST server will listen
  --port=7059           The port on which the REST server will listen

configtxlator proto_encode

Converts JSON documents into protobuf messages.

usage: configtxlator proto_encode --type=TYPE [<flags>]

Converts a JSON document to protobuf.

Flags:
  --help                Show context-sensitive help (also try --help-long and --help-man).
  --type=TYPE           The type of protobuf structure to encode to. For example, 'common.Config'.
  --input=/dev/stdin    A file containing the JSON document.
  --output=/dev/stdout  A file to write the output to.

configtxlator proto_decode

Converts protobuf messages into JSON documents.

usage: configtxlator proto_decode --type=TYPE [<flags>]

Converts a proto message to JSON.

Flags:
  --help                Show context-sensitive help (also try --help-long and --help-man).
  --type=TYPE           The type of protobuf structure to decode from. For example, 'common.Config'.
  --input=/dev/stdin    A file containing the proto message.
  --output=/dev/stdout  A file to write the JSON document to.

configtxlator compute_update

Computes a config update based on an original, and modified config.

usage: configtxlator compute_update --channel_id=CHANNEL_ID [<flags>]

Takes two marshaled common.Config messages and computes the config update which transitions between the two.

Flags:
  --help                   Show context-sensitive help (also try --help-long and --help-man).
  --original=ORIGINAL      The original config message.
  --updated=UPDATED        The updated config message.
  --channel_id=CHANNEL_ID  The name of the channel for this update.
  --output=/dev/stdout     A file to write the JSON document to.

configtxlator version

Shows the version.

usage: configtxlator version

Show version information

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

Examples

Decoding

Decode a block named fabric_block.pb to JSON and print to stdout.

configtxlator proto_decode --input fabric_block.pb --type common.Block

Alternatively, after starting the REST server, the following curl command performs the same operation through the REST API.

curl -X POST --data-binary @fabric_block.pb "${CONFIGTXLATOR_URL}/protolator/decode/common.Block"

Encoding

Convert a JSON document for a policy from stdin to a file named policy.pb.

configtxlator proto_encode --type common.Policy --output policy.pb

Alternatively, after starting the REST server, the following curl command performs the same operation through the REST API.

curl -X POST --data-binary @/dev/stdin "${CONFIGTXLATOR_URL}/protolator/encode/common.Policy" > policy.pb

Pipelines

Compute a config update from original_config.pb and modified_config.pb and decode it to JSON to stdout.

configtxlator compute_update --channel_id testchan --original original_config.pb --updated modified_config.pb | configtxlator proto_decode --type common.ConfigUpdate

Alternatively, after starting the REST server, the following curl commands perform the same operations through the REST API.

curl -X POST -F channel=testchan -F "original=@original_config.pb" -F "updated=@modified_config.pb" "${CONFIGTXLATOR_URL}/configtxlator/compute/update-from-configs" | curl -X POST --data-binary @/dev/stdin "${CONFIGTXLATOR_URL}/protolator/decode/common.ConfigUpdate"

Additional Notes

The tool name is a portmanteau of configtx and translator and is intended to convey that the tool simply converts between different equivalent data representations. It does not generate configuration. It does not submit or retrieve configuration. It does not modify configuration itself; it simply provides some bijective operations between different views of the configtx format.

There is no configuration file for configtxlator, nor are any authentication or authorization facilities included with the REST server. Because configtxlator does not have access to data, key material, or other information which might be considered sensitive, there is no risk to the owner of the server in exposing it to other clients. However, because the data sent by a user to the REST server might be confidential, the user should either trust the administrator of the server, run a local instance, or operate via the CLI.

Cryptogen Commands

Cryptogen is a utility for generating Hyperledger Fabric key material. It is mainly intended for use in test environments.

Syntax

The cryptogen command has different subcommands within it:

cryptogen [subcommand]

as follows

cryptogen generate
cryptogen showtemplate
cryptogen version
cryptogen extend
cryptogen help
cryptogen

These subcommands separate the different functions provided by the utility.

Within each subcommand there are many different options available and because of this, each is described in its own section in this topic.

If a command option is not specified then cryptogen will return some high-level help text as described in the --help flag section below.

cryptogen flags

The cryptogen command also has a set of associated flags:

cryptogen [flags]

as follows

cryptogen --help
cryptogen generate --help

These flags provide more information about cryptogen, and are designated global because they can be used at any command level. For example the --help flag can provide help on the cryptogen command, the cryptogen generate command, as well as their respective options.

Flag details

  • --help

    Use help to get brief help text for the cryptogen command. The help flag can often be used at different levels to get individual command help, or even help on a command option. See individual commands for more detail.

Usage

Here are some examples using the different available flags on the cryptogen command.

  • --help flag
cryptogen --help

usage: cryptogen [<flags>] <command> [<args> ...]

Utility for generating Hyperledger Fabric key material

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

Commands:
  help [<command>...]
    Show help.

  generate [<flags>]
    Generate key material

  showtemplate
    Show the default configuration template

  version
    Show version information

  extend [<flags>]
    Extend existing network

The cryptogen generate Command

The cryptogen generate command allows the generation of key material.

Syntax

The cryptogen generate command has the following syntax:

cryptogen generate [<flags>]

cryptogen generate flags

The cryptogen generate command has different flags available to it, and because of this, each flag is described in the relevant command topic.

cryptogen generate [flags]

as follows

cryptogen generate --output="crypto-config"
cryptogen generate --config=CONFIG

The global cryptogen command flags also apply as described in the cryptogen command flags:

  • --help

Flag details

  • --output="crypto-config"

    the output directory in which to place artifacts.

  • --config=CONFIG

    the configuration template to use.

Usage

Here are some examples using the different available flags on the cryptogen generate command.

./cryptogen generate --output="crypto-config"

org1.example.com
org2.example.com

The cryptogen showtemplate command

The cryptogen showtemplate command shows the default configuration template.

Syntax

The cryptogen showtemplate command has the following syntax:

cryptogen showtemplate

Usage

The output from the cryptogen showtemplate command is as follows:

cryptogen showtemplate

# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com

    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer

# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: false

    # ---------------------------------------------------------------------------
    # "CA"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of the CA for this
    # organization.  This entry is a Spec.  See "Specs" section below for details.
    # ---------------------------------------------------------------------------
    # CA:
    #    Hostname: ca # implicitly ca.org1.example.com
    #    Country: US
    #    Province: California
    #    Locality: San Francisco
    #    OrganizationalUnit: Hyperledger Fabric
    #    StreetAddress: address for org # default nil
    #    PostalCode: postalCode for org # default nil

    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration.  Most users will want to use Template, below
    #
    # Specs is an array of Spec entries.  Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN.  By default, this is the template:
    #
    #                              "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    #   - SANS:       (Optional) Specifies one or more Subject Alternative Names
    #                 to be set in the resulting x509. Accepts template
    #                 variables {{.Hostname}}, {{.Domain}}, {{.CommonName}}. IP
    #                 addresses provided here will be properly recognized. Other
    #                 values will be taken as DNS names.
    #                 NOTE: Two implicit entries are created for you:
    #                     - {{ .CommonName }}
    #                     - {{ .Hostname }}
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #     SANS:
    #       - "bar.{{.Domain}}"
    #       - "altfoo.{{.Domain}}"
    #       - "{{.Hostname}}.org6.net"
    #       - 172.16.10.31
    #   - Hostname: bar
    #   - Hostname: baz

    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive.  You may define both
    # sections and the aggregate nodes will be created for you.  Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 1
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
      # SANS:
      #   - "{{.Hostname}}.alt.{{.Domain}}"

    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1

  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.example.com
    EnableNodeOUs: false
    Template:
      Count: 1
    Users:
      Count: 1

The cryptogen extend Command

The cryptogen extend command allows you to extend an existing network, i.e., to generate all the additional key material needed by newly added entities.

Syntax

The cryptogen extend command has the following syntax:

cryptogen extend [<flags>]

cryptogen extend flags

The cryptogen extend command has different flags available to it, and because of this, each flag is described in the relevant command topic.

cryptogen extend [flags]

as follows

cryptogen extend --input="crypto-config"
cryptogen extend --config=CONFIG

The global cryptogen command flags also apply as described in the cryptogen command flags:

  • --help

Flag details

  • --input="crypto-config"

    the input directory containing the existing artifacts to extend.

  • --config=CONFIG

    the configuration template to use.

Usage

Here are some examples using the different available flags on the cryptogen extend command.

cryptogen extend --input="crypto-config" --config=config.yaml

org3.example.com

where config.yaml adds a new peer organization called org3.example.com.

Fabric-CA Commands

The Hyperledger Fabric CA is a Certificate Authority (CA) for Hyperledger Fabric. The commands available for the fabric-ca client and fabric-ca server are described in the links below.


Fabric-CA Client

The fabric-ca-client command allows you to manage identities (including attribute management) and certificates (including renewal and revocation).


More information on fabric-ca-client commands can be found here.


Fabric-CA Server

The fabric-ca-server command allows you to initialize and start a server process which may host one or more certificate authorities.


More information on fabric-ca-server commands can be found here.


Architecture Reference

Architecture Explained

The Hyperledger Fabric architecture delivers the following advantages:


  • Chaincode trust flexibility. The architecture separates trust assumptions for chaincodes (blockchain applications) from trust assumptions for ordering. In other words, the ordering service may be provided by one set of nodes (orderers) and tolerate some of them to fail or misbehave, and the endorsers may be different for each chaincode.
  • Scalability. As the endorser nodes responsible for particular chaincode are orthogonal to the orderers, the system may scale better than if these functions were done by the same nodes. In particular, this results when different chaincodes specify disjoint endorsers, which introduces a partitioning of chaincodes between endorsers and allows parallel chaincode execution (endorsement). Besides, chaincode execution, which can potentially be costly, is removed from the critical path of the ordering service.
  • Confidentiality. The architecture facilitates deployment of chaincodes that have confidentiality requirements with respect to the content and state updates of its transactions.
  • Consensus modularity. The architecture is modular and allows pluggable consensus (i.e., ordering service) implementations.

Part I: Elements of the architecture relevant to Hyperledger Fabric v1


  1. System architecture
  2. Basic workflow of transaction endorsement
  3. Endorsement policies

Part II: Post-v1 elements of the architecture


  1. Ledger checkpointing (pruning)

1. System architecture

The blockchain is a distributed system consisting of many nodes that communicate with each other. The blockchain runs programs called chaincode, holds state and ledger data, and executes transactions. The chaincode is the central element as transactions are operations invoked on the chaincode. Transactions have to be “endorsed” and only endorsed transactions may be committed and have an effect on the state. There may exist one or more special chaincodes for management functions and parameters, collectively called system chaincodes.


1.1. Transactions

Transactions may be of two types:

  • Deploy transactions create new chaincode and take a program as parameter. When a deploy transaction executes successfully, the chaincode has been installed “on” the blockchain.
  • Invoke transactions perform an operation in the context of previously deployed chaincode. An invoke transaction refers to a chaincode and to one of its provided functions. When successful, the chaincode executes the specified function - which may involve modifying the corresponding state, and returning an output.


As described later, deploy transactions are special cases of invoke transactions, where a deploy transaction that creates new chaincode, corresponds to an invoke transaction on a system chaincode.


Remark: This document currently assumes that a transaction either creates new chaincode or invokes an operation provided by one already deployed chaincode. This document does not yet describe: a) optimizations for query (read-only) transactions (included in v1), b) support for cross-chaincode transactions (post-v1 feature).


1.2. Blockchain datastructures
1.2.1. State

The latest state of the blockchain (or, simply, state) is modeled as a versioned key/value store (KVS), where keys are names and values are arbitrary blobs. These entries are manipulated by the chaincodes (applications) running on the blockchain through put and get KVS-operations. The state is stored persistently and updates to the state are logged. Notice that versioned KVS is adopted as state model, an implementation may use actual KVSs, but also RDBMSs or any other solution.


More formally, state s is modeled as an element of a mapping K -> (V X N), where:

  • K is a set of keys
  • V is a set of values
  • N is an infinite ordered set of version numbers. Injective function next: N -> N takes an element of N and returns the next version number.


Both V and N contain a special element ⊥ (empty type), which in the case of N is the lowest element. Initially all keys are mapped to (⊥, ⊥). For s(k)=(v,ver) we denote v by s(k).value, and ver by s(k).version.

VN 都包含了特殊的元素 ⊥ (空类型),这个元素是N最小的那个元素。初始化时所有的keys 都映射到(⊥, ⊥)。对于 s(k)=(v,ver) ,我们能推导出 v=s(k).value ,而 ver=s(k).version

KVS operations are modeled as follows:

  • put(k,v) for kK and vV, takes the blockchain state s and changes it to s' such that s'(k)=(v,next(s(k).version)) with s'(k')=s(k') for all k'!=k.
  • get(k) returns s(k).


State is maintained by peers, but not by orderers and clients.

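To make this model concrete, here is a minimal Go sketch of a versioned KVS with the put/get semantics above; the type and method names are illustrative assumptions for this sketch, not Fabric's implementation:

package kvs

// VersionedValue models s(k) = (v, ver); the Go zero value plays the
// role of the initial mapping (⊥, ⊥).
type VersionedValue struct {
	Value   []byte // v = s(k).value
	Version uint64 // ver = s(k).version
}

// State is the versioned key/value store.
type State map[string]VersionedValue

// Put implements put(k, v): it stores v under k and advances k's version
// via next(s(k).version); all other keys are left untouched.
func (s State) Put(k string, v []byte) {
	s[k] = VersionedValue{Value: v, Version: s[k].Version + 1}
}

// Get implements get(k): it returns s(k).
func (s State) Get(k string) VersionedValue {
	return s[k]
}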

State partitioning. Keys in the KVS can be recognized from their name to belong to a particular chaincode, in the sense that only transactions of a certain chaincode may modify the keys belonging to this chaincode. In principle, any chaincode can read the keys belonging to other chaincodes. Support for cross-chaincode transactions, which modify the state belonging to two or more chaincodes, is a post-v1 feature.


1.2.2. Ledger

Ledger provides a verifiable history of all successful state changes (we talk about valid transactions) and unsuccessful attempts to change state (we talk about invalid transactions), occurring during the operation of the system.


Ledger is constructed by the ordering service (see Sec 1.3.3) as a totally ordered hashchain of blocks of (valid or invalid) transactions. The hashchain imposes the total order of blocks in a ledger and each block contains an array of totally ordered transactions. This imposes total order across all transactions.


Ledger is kept at all peers and, optionally, at a subset of orderers. In the context of an orderer we refer to the Ledger as to OrdererLedger, whereas in the context of a peer we refer to the ledger as to PeerLedger. PeerLedger differs from the OrdererLedger in that peers locally maintain a bitmask that tells apart valid transactions from invalid ones (see Section XX for more details).


Peers may prune PeerLedger as described in Section XX (post-v1 feature). Orderers maintain OrdererLedger for fault-tolerance and availability (of the PeerLedger) and may decide to prune it at anytime, provided that properties of the ordering service (see Sec. 1.3.3) are maintained.


The ledger allows peers to replay the history of all transactions and to reconstruct the state. Therefore, state as described in Sec 1.2.1 is an optional datastructure.


1.3. Nodes

Nodes are the communication entities of the blockchain. A “node” is only a logical function in the sense that multiple nodes of different types can run on the same physical server. What counts is how nodes are grouped in “trust domains” and associated to logical entities that control them.


There are three types of nodes:


  1. Client or submitting-client: a client that submits an actual transaction-invocation to the endorsers, and broadcasts transaction-proposals to the ordering service.
  2. Peer: a node that commits transactions and maintains the state and a copy of the ledger (see Sec. 1.2). Besides, peers can have a special endorser role.
  3. Ordering-service-node or orderer: a node running the communication service that implements a delivery guarantee, such as atomic or total order broadcast.

The types of nodes are explained next in more detail.


1.3.1. Client

The client represents the entity that acts on behalf of an end-user. It must connect to a peer for communicating with the blockchain. The client may connect to any peer of its choice. Clients create and thereby invoke transactions.


As detailed in Section 2, clients communicate with both peers and the ordering service.


1.3.2. Peer

A peer receives ordered state updates in the form of blocks from the ordering service and maintains the state and the ledger.


Peers can additionally take up a special role of an endorsing peer, or an endorser. The special function of an endorsing peer occurs with respect to a particular chaincode and consists in endorsing a transaction before it is committed. Every chaincode may specify an endorsement policy that may refer to a set of endorsing peers. The policy defines the necessary and sufficient conditions for a valid transaction endorsement (typically a set of endorsers’ signatures), as described later in Sections 2 and 3. In the special case of deploy transactions that install new chaincode the (deployment) endorsement policy is specified as an endorsement policy of the system chaincode.


1.3.3. Ordering service nodes (Orderers)

The orderers form the ordering service, i.e., a communication fabric that provides delivery guarantees. The ordering service can be implemented in different ways: ranging from a centralized service (used e.g., in development and testing) to distributed protocols that target different network and node fault models.


Ordering service provides a shared communication channel to clients and peers, offering a broadcast service for messages containing transactions. Clients connect to the channel and may broadcast messages on the channel which are then delivered to all peers. The channel supports atomic delivery of all messages, that is, message communication with total-order delivery and (implementation specific) reliability. In other words, the channel outputs the same messages to all connected peers and outputs them to all peers in the same logical order. This atomic communication guarantee is also called total-order broadcast, atomic broadcast, or consensus in the context of distributed systems. The communicated messages are the candidate transactions for inclusion in the blockchain state.


Partitioning (ordering service channels). Ordering service may support multiple channels similar to the topics of a publish/subscribe (pub/sub) messaging system. Clients can connect to a given channel and can then send messages and obtain the messages that arrive. Channels can be thought of as partitions - clients connecting to one channel are unaware of the existence of other channels, but clients may connect to multiple channels. Even though some ordering service implementations included with Hyperledger Fabric support multiple channels, for simplicity of presentation, in the rest of this document, we assume ordering service consists of a single channel/topic.


Ordering service API. Peers connect to the channel provided by the ordering service, via the interface provided by the ordering service. The ordering service API consists of two basic operations (more generally asynchronous events):


TODO add the part of the API for fetching particular blocks under client/peer specified sequence numbers.


  • broadcast(blob): a client calls this to broadcast an arbitrary message blob for dissemination over the channel. This is also called request(blob) in the BFT context, when sending a request to a service.
  • deliver(seqno, prevhash, blob): the ordering service calls this on the peer to deliver the message blob with the specified non-negative integer sequence number (seqno) and hash of the most recently delivered blob (prevhash). In other words, it is an output event from the ordering service. deliver() is also sometimes called notify() in pub-sub systems or commit() in BFT systems. A sketch of this two-operation API follows the list.
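
A minimal Go sketch of this API, where Blob, DeliverEvent, and Service are illustrative names assumed for this sketch rather than Fabric's actual interfaces:

package ordering

// Blob is an opaque message, e.g. an endorsed transaction.
type Blob []byte

// DeliverEvent corresponds to deliver(seqno, prevhash, blob).
type DeliverEvent struct {
	SeqNo    uint64 // non-negative sequence number
	PrevHash []byte // hash of the most recently delivered blob
	Blob     Blob
}

// Service exposes broadcast(blob) to clients and a totally ordered
// stream of deliver events to peers.
type Service interface {
	Broadcast(blob Blob) error
	Deliver() <-chan DeliverEvent
}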

Ledger and block formation. The ledger (see also Sec. 1.2.2) contains all data output by the ordering service. In a nutshell, it is a sequence of deliver(seqno, prevhash, blob) events, which form a hash chain according to the computation of prevhash described before.


Most of the time, for efficiency reasons, instead of outputting individual transactions (blobs), the ordering service will group (batch) the blobs and output blocks within a single deliver event. In this case, the ordering service must impose and convey a deterministic ordering of the blobs within each block. The number of blobs in a block may be chosen dynamically by an ordering service implementation.


In the following, for ease of presentation, we define ordering service properties (rest of this subsection) and explain the workflow of transaction endorsement (Section 2) assuming one blob per deliver event. These are easily extended to blocks, assuming that a deliver event for a block corresponds to a sequence of individual deliver events for each blob within a block, according to the above-mentioned deterministic ordering of blobs within a block.


Ordering service properties

The guarantees of the ordering service (or atomic-broadcast channel) stipulate what happens to a broadcasted message and what relations exist among delivered messages. These guarantees are as follows:


  1. Safety (consistency guarantees): As long as peers are connected for sufficiently long periods of time to the channel (they can disconnect or crash, but will restart and reconnect), they will see an identical series of delivered (seqno, prevhash, blob) messages. This means the outputs (deliver() events) occur in the same order on all peers and according to sequence number and carry identical content (blob and prevhash) for the same sequence number. Note this is only a logical order, and a deliver(seqno, prevhash, blob) on one peer is not required to occur in any real-time relation to deliver(seqno, prevhash, blob) that outputs the same message at another peer. Put differently, given a particular seqno, no two correct peers deliver different prevhash or blob values. Moreover, no value blob is delivered unless some client (peer) actually called broadcast(blob) and, preferably, every broadcasted blob is only delivered once.

    Furthermore, the deliver() event contains the cryptographic hash of the data in the previous deliver() event (prevhash). When the ordering service implements atomic broadcast guarantees, prevhash is the cryptographic hash of the parameters from the deliver() event with sequence number seqno-1. This establishes a hash chain across deliver() events, which is used to help verify the integrity of the ordering service output, as discussed in Sections 4 and 5 later. In the special case of the first deliver() event, prevhash has a default value.


  2. Liveness (delivery guarantee): Liveness guarantees of the ordering service are specified by a ordering service implementation. The exact guarantees may depend on the network and node fault model.

    In principle, if the submitting client does not fail, the ordering service should guarantee that every correct peer that connects to the ordering service eventually delivers every submitted transaction.


To summarize, the ordering service ensures the following properties:


  • Agreement. For any two events at correct peers deliver(seqno, prevhash0, blob0) and deliver(seqno, prevhash1, blob1) with the same seqno, prevhash0==prevhash1 and blob0==blob1;
  • Hashchain integrity. For any two events at correct peers deliver(seqno-1, prevhash0, blob0) and deliver(seqno, prevhash, blob), prevhash = HASH(seqno-1||prevhash0||blob0) (see the sketch after this list).
  • No skipping. If an ordering service outputs deliver(seqno, prevhash, blob) at a correct peer p, such that seqno>0, then p already delivered an event deliver(seqno-1, prevhash0, blob0).
  • No creation. Any event deliver(seqno, prevhash, blob) at a correct peer must be preceded by a broadcast(blob) event at some (possibly distinct) peer;
  • No duplication (optional, yet desirable). For any two events broadcast(blob) and broadcast(blob'), when two events deliver(seqno0, prevhash0, blob) and deliver(seqno1, prevhash1, blob') occur at correct peers and blob == blob', then seqno0==seqno1 and prevhash0==prevhash1.
  • Liveness. If a correct client invokes an event broadcast(blob) then every correct peer “eventually” issues an event deliver(*, *, blob), where * denotes an arbitrary value.
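
Reusing the DeliverEvent type from the ordering service API sketch above, the hashchain-integrity and no-skipping checks might look as follows in Go; the exact byte encoding hashed here is an assumption of this sketch, since the document does not fix one:

package ordering

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
)

// prevHashFor computes the expected prevhash = HASH(seqno-1||prevhash0||blob0)
// for the event that immediately follows prev.
func prevHashFor(prev DeliverEvent) []byte {
	h := sha256.New()
	seq := make([]byte, 8)
	binary.BigEndian.PutUint64(seq, prev.SeqNo)
	h.Write(seq) // seqno-1, i.e. the previous event's number
	h.Write(prev.PrevHash)
	h.Write(prev.Blob)
	return h.Sum(nil)
}

// checkChain enforces "no skipping" and "hashchain integrity" for two
// consecutive deliver events observed at a correct peer.
func checkChain(prev, next DeliverEvent) bool {
	return next.SeqNo == prev.SeqNo+1 &&
		bytes.Equal(next.PrevHash, prevHashFor(prev))
}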

2. Basic workflow of transaction endorsement

In the following we outline the high-level request flow for a transaction.


Remark: Notice that the following protocol does not assume that all transactions are deterministic, i.e., it allows for non-deterministic transactions.


2.1. The client creates a transaction and sends it to endorsing peers of its choice

To invoke a transaction, the client sends a PROPOSE message to a set of endorsing peers of its choice (possibly not at the same time - see Sections 2.1.2. and 2.3.). The set of endorsing peers for a given chaincodeID is made available to client via peer, which in turn knows the set of endorsing peers from endorsement policy (see Section 3). For example, the transaction could be sent to all endorsers of a given chaincodeID. That said, some endorsers could be offline, others may object and choose not to endorse the transaction. The submitting client tries to satisfy the policy expression with the endorsers available.


In the following, we first detail PROPOSE message format and then discuss possible patterns of interaction between submitting client and endorsers.


2.1.1. PROPOSE message format

The format of a PROPOSE message is <PROPOSE,tx,[anchor]>, where tx is a mandatory and anchor optional argument explained in the following.


  • tx=<clientID,chaincodeID,txPayload,timestamp,clientSig>, where

    • clientID is an ID of the submitting client,
    • chaincodeID refers to the chaincode to which the transaction pertains,
    • txPayload is the payload containing the submitted transaction itself,
    • timestamp is a monotonically increasing (for every new transaction) integer maintained by the client,
    • clientSig is the client's signature on the other fields of tx.

    The details of txPayload will differ between invoke transactions and deploy transactions (i.e., invoke transactions referring to a deploy-specific system chaincode). For an invoke transaction, txPayload would consist of two fields


    • txPayload = <operation, metadata>, where
      • operation denotes the chaincode operation (function) and arguments,
      • metadata denotes attributes related to the invocation.

    For a deploy transaction, txPayload would consist of three fields


    • txPayload = <source, metadata, policies>, where
      • source denotes the source code of the chaincode,
      • metadata denotes attributes related to the chaincode and application,
      • policies contains policies related to the chaincode that are accessible to all peers, such as the endorsement policy. Note that endorsement policies are not supplied with txPayload in a deploy transaction, but txPayload of a deploy contains endorsement policy ID and its parameters (see Section 3).
  • anchor contains read version dependencies, or more specifically, key-version pairs (i.e., anchor is a subset of KxN), that binds or “anchors” the PROPOSE request to specified versions of keys in a KVS (see Section 1.2.). If the client specifies the anchor argument, an endorser endorses a transaction only upon read version numbers of corresponding keys in its local KVS match anchor (see Section 2.2. for more details).


Cryptographic hash of tx is used by all nodes as a unique transaction identifier tid (i.e., tid=HASH(tx)). The client stores tid in memory and waits for responses from endorsing peers.

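The message layout above can be summarized as a small Go sketch; the struct fields mirror the listed components, while the JSON serialization used to compute tid = HASH(tx) is only an illustrative stand-in for a canonical encoding:

package endorsement

import (
	"crypto/sha256"
	"encoding/json"
)

// Tx mirrors tx = <clientID, chaincodeID, txPayload, timestamp, clientSig>.
type Tx struct {
	ClientID    string
	ChaincodeID string
	TxPayload   []byte
	Timestamp   uint64 // monotonically increasing per client
	ClientSig   []byte // client's signature on the other fields of tx
}

// Propose mirrors <PROPOSE, tx, [anchor]>; Anchor is the optional set of
// key -> read-version dependencies (a subset of K x N).
type Propose struct {
	Tx     Tx
	Anchor map[string]uint64
}

// TID computes tid = HASH(tx), which the client stores while waiting for
// responses from endorsing peers.
func TID(tx Tx) [sha256.Size]byte {
	raw, _ := json.Marshal(tx)
	return sha256.Sum256(raw)
}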

2.1.2. Message patterns

The client decides on the sequence of interaction with endorsers. For example, a client would typically send <PROPOSE, tx> (i.e., without the anchor argument) to a single endorser, which would then produce the version dependencies (anchor) which the client can later on use as an argument of its PROPOSE message to other endorsers. As another example, the client could directly send <PROPOSE, tx> (without anchor) to all endorsers of its choice. Different patterns of communication are possible and client is free to decide on those (see also Section 2.3.).


2.2. The endorsing peer simulates a transaction and produces an endorsement signature

On reception of a <PROPOSE,tx,[anchor]> message from a client, the endorsing peer epID first verifies the client’s signature clientSig and then simulates a transaction. If the client specifies anchor, the endorsing peer simulates the transaction only if the read version numbers (i.e., readset as defined below) of the corresponding keys in its local KVS match the version numbers specified by anchor.


Simulating a transaction involves the endorsing peer tentatively executing the transaction (txPayload), by invoking the chaincode to which the transaction refers (chaincodeID) against the copy of the state that the endorsing peer locally holds.


As a result of the execution, the endorsing peer computes read version dependencies (readset) and state updates (writeset), also called MVCC+postimage info in DB language.


Recall that the state consists of key/value (k/v) pairs. All k/v entries are versioned, that is, every entry contains ordered version information, which is incremented every time when the value stored under a key is updated. The peer that interprets the transaction records all k/v pairs accessed by the chaincode, either for reading or for writing, but the peer does not yet update its state. More specifically:


  • Given state s before an endorsing peer executes a transaction, for every key k read by the transaction, pair (k,s(k).version) is added to readset.
  • Additionally, for every key k modified by the transaction to the new value v', pair (k,v') is added to writeset. Alternatively, v' could be the delta of the new value to previous value (s(k).value). A sketch of this bookkeeping follows the list.
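
A minimal Go sketch of this bookkeeping, assuming a simple get/put callback shape for the chaincode (an assumption of this sketch, not Fabric's chaincode API):

package endorsement

// versioned is a local stand-in for one entry of the peer's state snapshot.
type versioned struct {
	Value   []byte
	Version uint64
}

// SimResult is the MVCC+postimage information produced by simulation.
type SimResult struct {
	ReadSet  map[string]uint64 // (k, s(k).version) for every key read
	WriteSet map[string][]byte // (k, v') for every key written
}

// Simulate executes the chaincode function against a state snapshot and
// records every access; the state itself is never updated here.
func Simulate(state map[string]versioned, chaincode func(get func(string) []byte, put func(string, []byte))) SimResult {
	res := SimResult{ReadSet: map[string]uint64{}, WriteSet: map[string][]byte{}}
	get := func(k string) []byte {
		res.ReadSet[k] = state[k].Version // record (k, s(k).version)
		return state[k].Value
	}
	put := func(k string, v []byte) {
		res.WriteSet[k] = v // record (k, v'); no state update
	}
	chaincode(get, put)
	return res
}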

If a client specifies anchor in the PROPOSE message, then the client-specified anchor must equal the readset produced by the endorsing peer when simulating the transaction.


Then, the peer forwards internally tran-proposal (and possibly tx) to the part of its (peer’s) logic that endorses a transaction, referred to as endorsing logic. By default, endorsing logic at a peer accepts the tran-proposal and simply signs the tran-proposal. However, endorsing logic may interpret arbitrary functionality, to, e.g., interact with legacy systems with tran-proposal and tx as inputs to reach the decision whether to endorse a transaction or not.


If endorsing logic decides to endorse a transaction, it sends a <TRANSACTION-ENDORSED, tid, tran-proposal,epSig> message to the submitting client (tx.clientID), where:


  • tran-proposal := (epID,tid,chaincodeID,txContentBlob,readset,writeset),

    where txContentBlob is chaincode/transaction specific information. The intention is to have txContentBlob used as some representation of tx (e.g., txContentBlob=tx.txPayload).

  • epSig is the endorsing peer’s signature on tran-proposal.

Else, in case the endorsing logic refuses to endorse the transaction, an endorser may send a message (TRANSACTION-INVALID, tid, REJECTED) to the submitting client.


Notice that an endorser does not change its state in this step, the updates produced by transaction simulation in the context of endorsement do not affect the state!


2.3. The submitting client collects an endorsement for a transaction and broadcasts it through ordering service

The submitting client waits until it receives “enough” messages and signatures on (TRANSACTION-ENDORSED, tid, *, *) statements to conclude that the transaction proposal is endorsed. As discussed in Section 2.1.2., this may involve one or more round-trips of interaction with endorsers.


The exact number of “enough” depends on the chaincode endorsement policy (see also Section 3). If the endorsement policy is satisfied, the transaction has been endorsed; note that it is not yet committed. The collection of signed TRANSACTION-ENDORSED messages from endorsing peers which establish that a transaction is endorsed is called an endorsement and denoted by endorsement.


If the submitting client does not manage to collect an endorsement for a transaction proposal, it abandons this transaction with an option to retry later.


For transaction with a valid endorsement, we now start using the ordering service. The submitting client invokes ordering service using the broadcast(blob), where blob=endorsement. If the client does not have capability of invoking ordering service directly, it may proxy its broadcast through some peer of its choice. Such a peer must be trusted by the client not to remove any message from the endorsement or otherwise the transaction may be deemed invalid. Notice that, however, a proxy peer may not fabricate a valid endorsement.


2.4. The ordering service delivers a transaction to the peers

When an event deliver(seqno, prevhash, blob) occurs and a peer has applied all state updates for blobs with sequence number lower than seqno, a peer does the following:


  • It checks that the blob.endorsement is valid according to the policy of the chaincode (blob.tran-proposal.chaincodeID) to which it refers.
  • In a typical case, it also verifies that the dependencies (blob.endorsement.tran-proposal.readset) have not been violated meanwhile. In more complex use cases, tran-proposal fields in endorsement may differ and in this case endorsement policy (Section 3) specifies how the state evolves.

Verification of dependencies can be implemented in different ways, according to a consistency property or “isolation guarantee” that is chosen for the state updates. Serializability is a default isolation guarantee, unless chaincode endorsement policy specifies a different one. Serializability can be provided by requiring the version associated with every key in the readset to be equal to that key’s version in the state, and rejecting transactions that do not satisfy this requirement.


  • If all these checks pass, the transaction is deemed valid or committed. In this case, the peer marks the transaction with 1 in the bitmask of the PeerLedger, applies blob.endorsement.tran-proposal.writeset to blockchain state (if tran-proposals are the same, otherwise endorsement policy logic defines the function that takes blob.endorsement).
  • If the endorsement policy verification of blob.endorsement fails, the transaction is invalid and the peer marks the transaction with 0 in the bitmask of the PeerLedger. It is important to note that invalid transactions do not change the state.

Note that this is sufficient to have all (correct) peers have the same state after processing a deliver event (block) with a given sequence number. Namely, by the guarantees of the ordering service, all correct peers will receive an identical sequence of deliver(seqno, prevhash, blob) events. As the evaluation of the endorsement policy and evaluation of version dependencies in readset are deterministic, all correct peers will also come to the same conclusion whether a transaction contained in a blob is valid. Hence, all peers commit and apply the same sequence of transactions and update their state in the same way.

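The per-transaction commit logic described above can be sketched in Go as follows; the endorsement policy check is abstracted into a boolean, and the types are illustrative assumptions rather than Fabric's code:

package validation

// kv is one versioned state entry, as in Section 1.2.1.
type kv struct {
	Value   []byte
	Version uint64
}

// endorsement carries the fields checked at commit time; signatures and
// the full tran-proposal are elided in this sketch.
type endorsement struct {
	ReadSet  map[string]uint64
	WriteSet map[string][]byte
}

// commit validates one delivered endorsement against the current state
// and, if valid, applies its writeset. The return value is the bitmask
// entry recorded in PeerLedger: true for 1 (valid), false for 0 (invalid).
func commit(state map[string]kv, e endorsement, policyOK bool) bool {
	if !policyOK {
		return false // endorsement policy failed; state unchanged
	}
	// Serializability: every key read must still be at the version read.
	for k, ver := range e.ReadSet {
		if state[k].Version != ver {
			return false // dependency violated; state unchanged
		}
	}
	// All checks passed: apply the writeset, bumping versions.
	for k, v := range e.WriteSet {
		state[k] = kv{Value: v, Version: state[k].Version + 1}
	}
	return true
}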

Figure 1. Illustration of one possible transaction flow (common-case path).

3. Endorsement policies

3.1. Endorsement policy specification

An endorsement policy is a condition on what endorses a transaction. Blockchain peers have a pre-specified set of endorsement policies, which are referenced by a deploy transaction that installs specific chaincode. Endorsement policies can be parametrized, and these parameters can be specified by a deploy transaction.


To guarantee blockchain and security properties, the set of endorsement policies should be a set of proven policies with limited set of functions in order to ensure bounded execution time (termination), determinism, performance and security guarantees.


Dynamic addition of endorsement policies (e.g., by deploy transaction on chaincode deploy time) is very sensitive in terms of bounded policy evaluation time (termination), determinism, performance and security guarantees. Therefore, dynamic addition of endorsement policies is not allowed, but can be supported in future.


3.2. Transaction evaluation against endorsement policy

A transaction is declared valid only if it has been endorsed according to the policy. An invoke transaction for a chaincode will first have to obtain an endorsement that satisfies the chaincode’s policy or it will not be committed. This takes place through the interaction between the submitting client and endorsing peers as explained in Section 2.


Formally the endorsement policy is a predicate on the endorsement, and potentially further state that evaluates to TRUE or FALSE. For deploy transactions the endorsement is obtained according to a system-wide policy (for example, from the system chaincode).


An endorsement policy predicate refers to certain variables. Potentially it may refer to:


  1. keys or identities relating to the chaincode (found in the metadata of the chaincode), for example, a set of endorsers;
  2. further metadata of the chaincode;
  3. elements of the endorsement and endorsement.tran-proposal;
  4. and potentially more.

The above list is ordered by increasing expressiveness and complexity, that is, it will be relatively simple to support policies that only refer to keys and identities of nodes.


The evaluation of an endorsement policy predicate must be deterministic. An endorsement shall be evaluated locally by every peer such that a peer does not need to interact with other peers, yet all correct peers evaluate the endorsement policy in the same way.


3.3. Example endorsement policies

The predicate may contain logical expressions and evaluates to TRUE or FALSE. Typically the condition will use digital signatures on the transaction invocation issued by endorsing peers for the chaincode.


Suppose the chaincode specifies the endorser set E = {Alice, Bob, Charlie, Dave, Eve, Frank, George}. Some example policies:


  • A valid signature from on the same tran-proposal from all members of E.
  • Ede所有成员节点对同一个 tran-proposal 的有效签名。
  • A valid signature from any single member of E.
  • E的任意一个成员的有效签名。
  • Valid signatures on the same tran-proposal from endorsing peers according to the condition (Alice OR Bob) AND (any two of: Charlie, Dave, Eve, Frank, George).
  • 背书节点根据条件对同一个 tran-proposal 的有效签名。 (Alice OR Bob) AND (any two of: Charlie, Dave, Eve, Frank, George)
  • Valid signatures on the same tran-proposal by any 5 out of the 7 endorsers. (More generally, for chaincode with n > 3f endorsers, valid signatures by any 2f+1 out of the n endorsers, or by any group of more than (n+f)/2 endorsers.)
  • 7个背书节点中5个对同一个 tran-proposal 的有效签名。(更通用的,对于有 n > 3f 背书 节点的链码,需要 n 个背书节点中任意 2f+1 个提供签名,或者多于任意 (n+f)/2 个背书 节点的签名。)
  • Suppose there is an assignment of “stake” or “weights” to the endorsers, like {Alice=49, Bob=15, Charlie=15, Dave=10, Eve=7, Frank=3, George=1}, where the total stake is 100: The policy requires valid signatures from a set that has a majority of the stake (i.e., a group with combined stake strictly more than 50), such as {Alice, X} with any X different from George, or {everyone together except Alice}. And so on.
  • 假设对背书节点分配”股份”或者”权重”,像 {Alice=49, Bob=15, Charlie=15, Dave=10, Eve=7, Frank=3, George=1} ,总的股份是100:这个 策略需要集合中大部分股份的签名(比如总和股份超过50),比如 {Alice, X} ,其中 X 和George不同。
  • The assignment of stake in the previous example condition could be static (fixed in the metadata of the chaincode) or dynamic (e.g., dependent on the state of the chaincode and be modified during the execution).
  • Valid signatures from (Alice OR Bob) on tran-proposal1 and valid signatures from (any two of: Charlie, Dave, Eve, Frank, George) on tran-proposal2, where tran-proposal1 and tran-proposal2 differ only in their endorsing peers and state updates.

How useful these policies are will depend on the application, on the desired resilience of the solution against failures or misbehavior of endorsers, and on various other properties.

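To make these predicates concrete, the following Go sketch evaluates a simple "any k out of n endorsers" threshold policy. The Endorsement type, the members set, and the function shape are illustrative assumptions for this document, not Fabric's actual policy engine.

package main

// Endorsement is a hypothetical view of one peer's endorsement: who signed,
// and whether that signature verified over the tran-proposal.
type Endorsement struct {
	Endorser string
	Valid    bool
}

// kOutOfN evaluates an "any k out of n endorsers" policy: TRUE iff at least
// k distinct members of the endorser set signed the same tran-proposal.
// Note the evaluation is deterministic, as required above.
func kOutOfN(endorsements []Endorsement, members map[string]bool, k int) bool {
	seen := map[string]bool{}
	for _, e := range endorsements {
		if e.Valid && members[e.Endorser] {
			seen[e.Endorser] = true
		}
	}
	return len(seen) >= k
}

With E as above, the "any 5 out of the 7 endorsers" example would be kOutOfN(endorsements, E, 5).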

4 (post-v1). Validated ledger and PeerLedger checkpointing (pruning)

4.1. Validated ledger (VLedger)

To maintain the abstraction of a ledger that contains only valid and committed transactions (that appears in Bitcoin, for example), peers may, in addition to state and Ledger, maintain the Validated Ledger (or VLedger). This is a hash chain derived from the ledger by filtering out invalid transactions.


The construction of the VLedger blocks (called here vBlocks) proceeds as follows. As the PeerLedger blocks may contain invalid transactions (i.e., transactions with invalid endorsement or with invalid version dependencies), such transactions are filtered out by peers before a transaction from a block is added to a vBlock. Every peer does this by itself (e.g., by using the bitmask associated with PeerLedger). A vBlock is defined as a block without the invalid transactions that have been filtered out. Such vBlocks are inherently dynamic in size and may be empty. An illustration of vBlock construction is given in the figure below.


Illustration of vBlock formation

Figure 2. Illustration of validated ledger block (vBlock) formation from ledger (PeerLedger) blocks.

vBlocks are chained together to a hash chain by every peer. More specifically, every block of a validated ledger contains:


  • The hash of the previous vBlock.
  • vBlock number.
  • An ordered list of all valid transactions committed by the peers since the last vBlock was computed (i.e., list of valid transactions in a corresponding block).
  • The hash of the corresponding block (in PeerLedger) from which the current vBlock is derived.

All this information is concatenated and hashed by a peer, producing the hash of the vBlock in the validated ledger.

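As one concrete reading of this construction, the Go sketch below hashes a vBlock from the four fields listed above. The struct layout and the choice of SHA-256 are assumptions for illustration; the text does not fix a particular encoding.

package main

import (
	"crypto/sha256"
	"encoding/binary"
)

// VBlock is a hypothetical representation of a validated-ledger block.
type VBlock struct {
	PrevVBlockHash []byte   // hash of the previous vBlock
	Number         uint64   // vBlock number
	ValidTxs       [][]byte // ordered list of valid transactions from the corresponding block
	PeerLedgerHash []byte   // hash of the corresponding PeerLedger block
}

// Hash concatenates all four fields and hashes them, as described above.
func (v *VBlock) Hash() []byte {
	h := sha256.New()
	h.Write(v.PrevVBlockHash)
	num := make([]byte, 8)
	binary.BigEndian.PutUint64(num, v.Number)
	h.Write(num)
	for _, tx := range v.ValidTxs {
		h.Write(tx)
	}
	h.Write(v.PeerLedgerHash)
	return h.Sum(nil)
}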

4.2. PeerLedger Checkpointing

The ledger contains invalid transactions, which may not necessarily be recorded forever. However, peers cannot simply discard PeerLedger blocks and thereby prune PeerLedger once they establish the corresponding vBlocks. Namely, in this case, if a new peer joins the network, other peers could not transfer the discarded blocks (pertaining to PeerLedger) to the joining peer, nor convince the joining peer of the validity of their vBlocks.


To facilitate pruning of the PeerLedger, this document describes a checkpointing mechanism. This mechanism establishes the validity of the vBlocks across the peer network and allows checkpointed vBlocks to replace the discarded PeerLedger blocks. This, in turn, reduces storage space, as there is no need to store invalid transactions. It also reduces the work to reconstruct the state for new peers that join the network (as they do not need to establish validity of individual transactions when reconstructing the state by replaying PeerLedger, but may simply replay the state updates contained in the validated ledger).


4.2.1. Checkpointing protocol

Checkpointing is performed periodically by the peers every CHK blocks, where CHK is a configurable parameter. To initiate a checkpoint, the peers broadcast (e.g., gossip) to other peers the message <CHECKPOINT,blocknohash,blockno,stateHash,peerSig>, where blockno is the current block number and blocknohash is its respective hash, stateHash is the hash of the latest state (produced, e.g., by a Merkle hash) upon validation of block blockno, and peerSig is the peer’s signature on (CHECKPOINT,blocknohash,blockno,stateHash), referring to the validated ledger.


A peer collects CHECKPOINT messages until it obtains enough correctly signed messages with matching blockno, blocknohash and stateHash to establish a valid checkpoint (see Section 4.2.2.).


Upon establishing a valid checkpoint for block number blockno with blocknohash, a peer:


  • if blockno>latestValidCheckpoint.blockno, then a peer assigns latestValidCheckpoint=(blocknohash,blockno),
  • stores the set of respective peer signatures that constitute a valid checkpoint into the set latestValidCheckpointProof,
  • stores the state corresponding to stateHash to latestValidCheckpointedState,
  • (optionally) prunes its PeerLedger up to block number blockno (inclusive).
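
The Go sketch below gives one plausible shape for the CHECKPOINT message and the update rule from the list above. The field names, the Peer state container, and the way the proof and state are passed in are illustrative assumptions; deciding when messages are "sufficiently many" is the validity policy discussed in Section 4.2.2.

package main

// Checkpoint is a hypothetical CHECKPOINT gossip message.
type Checkpoint struct {
	BlockNoHash []byte // hash of block blockno
	BlockNo     uint64 // current block number
	StateHash   []byte // e.g., a Merkle hash of the state after validating blockno
	PeerSig     []byte // signature over (CHECKPOINT, blocknohash, blockno, stateHash)
}

// Peer holds the checkpoint-related state named in the list above.
type Peer struct {
	latestValidCheckpoint        Checkpoint
	latestValidCheckpointProof   []Checkpoint
	latestValidCheckpointedState map[string][]byte
}

// onValidCheckpoint applies the update rule once enough matching,
// correctly signed CHECKPOINT messages establish a valid checkpoint.
func (p *Peer) onValidCheckpoint(cp Checkpoint, proof []Checkpoint, state map[string][]byte) {
	if cp.BlockNo > p.latestValidCheckpoint.BlockNo {
		p.latestValidCheckpoint = cp
		p.latestValidCheckpointProof = proof
		p.latestValidCheckpointedState = state
		// optionally prune PeerLedger up to block number cp.BlockNo (inclusive)
	}
}
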
4.2.2. Valid checkpoints

Clearly, the checkpointing protocol raises the following questions: When can a peer prune its PeerLedger? How many CHECKPOINT messages are “sufficiently many”? This is defined by a checkpoint validity policy, with (at least) two possible approaches, which may also be combined:


  • Local (peer-specific) checkpoint validity policy (LCVP). A local policy at a given peer p may specify a set of peers which peer p trusts and whose CHECKPOINT messages are sufficient to establish a valid checkpoint. For example, LCVP at peer Alice may define that Alice needs to receive CHECKPOINT message from Bob, or from both Charlie and Dave.
  • Global checkpoint validity policy (GCVP). A checkpoint validity policy may be specified globally. This is similar to a local peer policy, except that it is stipulated at the system (blockchain) granularity, rather than peer granularity. For instance, GCVP may specify that:
    • each peer may trust a checkpoint if confirmed by 11 different peers.
    • in a specific deployment in which every orderer is collocated with a peer in the same machine (i.e., trust domain) and where up to f orderers may be (Byzantine) faulty, each peer may trust a checkpoint if confirmed by f+1 different peers collocated with orderers.

Transaction Flow


This document outlines the transactional mechanics that take place during a standard asset exchange. The scenario includes two clients, A and B, who are buying and selling radishes. They each have a peer on the network through which they send their transactions and interact with the ledger.


[Figure: step0.png]

Assumptions


This flow assumes that a channel is set up and running. The application user has registered and enrolled with the organization’s certificate authority (CA) and received back necessary cryptographic material, which is used to authenticate to the network.


The chaincode (containing a set of key value pairs representing the initial state of the radish market) is installed on the peers and instantiated on the channel. The chaincode contains logic defining a set of transaction instructions and the agreed upon price for a radish. An endorsement policy has also been set for this chaincode, stating that both peerA and peerB must endorse any transaction.


[Figure: step1.png]
  1. Client A initiates a transaction

What’s happening? - Client A is sending a request to purchase radishes. The request targets peerA and peerB, who are respectively representative of Client A and Client B. The endorsement policy states that both peers must endorse any transaction, therefore the request goes to peerA and peerB.


Next, the transaction proposal is constructed. An application leveraging a supported SDK (Node, Java, Python) utilizes one of the available APIs to generate a transaction proposal. The proposal is a request to invoke a chaincode function so that data can be read and/or written to the ledger (i.e. write new key value pairs for the assets). The SDK serves as a shim to package the transaction proposal into the properly architected format (protocol buffer over gRPC) and takes the user’s cryptographic credentials to produce a unique signature for this transaction proposal.

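As a rough picture of what the SDK does at this step, the Go sketch below serializes a proposal and signs it with the client's key. The Proposal struct is a made-up stand-in for the real protocol-buffer messages, and JSON is used instead of protobuf purely to keep the sketch self-contained.

package main

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
	"encoding/json"
)

// Proposal is a made-up stand-in for the protobuf transaction proposal.
type Proposal struct {
	ChaincodeID string
	Function    string
	Args        []string
}

// signProposal serializes the proposal and produces the client's unique
// signature over it, conceptually what the SDK does with the user's
// cryptographic credentials before sending the proposal to the endorsers.
func signProposal(p Proposal, key *ecdsa.PrivateKey) (payload, sig []byte, err error) {
	payload, err = json.Marshal(p) // the real SDKs use protocol buffers over gRPC
	if err != nil {
		return nil, nil, err
	}
	digest := sha256.Sum256(payload)
	sig, err = ecdsa.SignASN1(rand.Reader, key, digest[:])
	return payload, sig, err
}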

[Figure: step2.png]
  2. Endorsing peers verify signature & execute the transaction

The endorsing peers verify (1) that the transaction proposal is well formed, (2) it has not been submitted already in the past (replay-attack protection), (3) the signature is valid (using MSP), and (4) that the submitter (Client A, in the example) is properly authorized to perform the proposed operation on that channel (namely, each endorsing peer ensures that the submitter satisfies the channel’s Writers policy). The endorsing peers take the transaction proposal inputs as arguments to the invoked chaincode’s function. The chaincode is then executed against the current state database to produce transaction results including a response value, read set, and write set. No updates are made to the ledger at this point. The set of these values, along with the endorsing peer’s signature, is passed back as a “proposal response” to the SDK, which parses the payload for the application to consume.


{The MSP is a peer component that allows peers to verify transaction requests arriving from clients and to sign transaction results (endorsements). The Writers policy is defined at channel creation time, and determines which users are entitled to submit a transaction to that channel.}


[Figure: step3.png]
  3. Proposal responses are inspected

The application verifies the endorsing peer signatures and compares the proposal responses to determine if the proposal responses are the same. If the chaincode only queried the ledger, the application would inspect the query response and would typically not submit the transaction to the ordering service. If the client application intends to submit the transaction to the ordering service to update the ledger, the application determines if the specified endorsement policy has been fulfilled before submitting (i.e. did peerA and peerB both endorse). The architecture is such that even if an application chooses not to inspect responses or otherwise forwards an unendorsed transaction, the endorsement policy will still be enforced by peers and upheld at the commit validation phase.


[Figure: step4.png]
  4. Client assembles endorsements into a transaction

The application “broadcasts” the transaction proposal and response within a “transaction message” to the Ordering Service. The transaction will contain the read/write sets, the endorsing peers’ signatures and the Channel ID. The Ordering Service does not need to inspect the entire content of a transaction in order to perform its operation; it simply receives transactions from all channels in the network, orders them chronologically by channel, and creates blocks of transactions per channel.


[Figure: step5.png]
  5. Transaction is validated and committed

The blocks of transactions are “delivered” to all peers on the channel. The transactions within the block are validated to ensure endorsement policy is fulfilled and to ensure that there have been no changes to ledger state for read set variables since the read set was generated by the transaction execution. Transactions in the block are tagged as being valid or invalid.


[Figure: step6.png]
  6. Ledger updated

Each peer appends the block to the channel’s chain, and for each valid transaction the write sets are committed to current state database. An event is emitted, to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as notification of whether the transaction was validated or invalidated.


Note: See the swimlane diagram to better understand the server side flow and the protobuffers.


Hyperledger Fabric SDKs


Hyperledger Fabric intends to offer a number of SDKs for a wide variety of programming languages. The first two delivered are the Node.js and Java SDKs. We hope to provide Python and Go SDKs soon after the 1.0.0 release.


Channels

A Hyperledger Fabric channel is a private “subnet” of communication between two or more specific network members, for the purpose of conducting private and confidential transactions. A channel is defined by members (organizations), anchor peers per member, the shared ledger, chaincode application(s) and the ordering service node(s). Each transaction on the network is executed on a channel, where each party must be authenticated and authorized to transact on that channel. Each peer that joins a channel, has its own identity given by a membership services provider (MSP), which authenticates each peer to its channel peers and services.


To create a new channel, the client SDK calls configuration system chaincode and references properties such as anchor peers and members (organizations). This request creates a genesis block for the channel ledger, which stores configuration information about the channel policies, members and anchor peers. When adding a new member to an existing channel, either this genesis block, or if applicable, a more recent reconfiguration block, is shared with the new member.


Note

See the Channel Configuration (configtx) section for more details on the properties and proto structures of config transactions.

The election of a leading peer for each member on a channel determines which peer communicates with the ordering service on behalf of the member. If no leader is identified, an algorithm can be used to identify the leader. The consensus service orders transactions and delivers them, in a block, to each leading peer, which then distributes the block to its member peers, and across the channel, using the gossip protocol.


Although any one anchor peer can belong to multiple channels, and therefore maintain multiple ledgers, no ledger data can pass from one channel to another. This separation of ledgers, by channel, is defined and implemented by configuration chaincode, the identity membership service and the gossip data dissemination protocol. The dissemination of data, which includes information on transactions, ledger state and channel membership, is restricted to peers with verifiable membership on the channel. This isolation of peers and ledger data, by channel, allows network members that require private and confidential transactions to coexist with business competitors and other restricted members, on the same blockchain network.


Capability Requirements

Because Fabric is a distributed system that will usually involve multiple organizations (sometimes in different countries or even continents), it is possible (and typical) that many different versions of Fabric code will exist in the network. Nevertheless, it’s vital that networks process transactions in the same way so that everyone has the same view of the current network state.

This means that every network – and every channel within that network – must define a set of what we call “capabilities” to be able to participate in processing transactions. For example, Fabric v1.1 introduces new MSP role types of “Peer” and “Client”. However, if a v1.0 peer does not understand these new role types, it will not be able to appropriately evaluate an endorsement policy that references them. This means that before the new role types may be used, the network must agree to enable the v1.1 channel capability requirement, ensuring that all peers come to the same decision.

Only binaries which support the required capabilities will be able to participate in the channel, and newer binary versions will not enable new validation logic until the corresponding capability is enabled. In this way, capability requirements ensure that even with disparate builds and versions, it is not possible for the network to suffer a state fork.

Defining Capability Requirements

Capability requirements are defined per channel in the channel configuration (found in the channel’s most recent configuration block). The channel configuration contains three locations, each of which defines a capability of a different type.

Capability Type   Canonical Path                      JSON Path
Channel           /Channel/Capabilities               .channel_group.values.Capabilities
Orderer           /Channel/Orderer/Capabilities       .channel_group.groups.Orderer.values.Capabilities
Application       /Channel/Application/Capabilities   .channel_group.groups.Application.values.Capabilities
  • Channel: these capabilities apply to both peer and orderers and are located in the root Channel group.
  • Orderer: apply to orderers only and are located in the Orderer group.
  • Application: apply to peers only and are located in the Application group.

The capabilities are broken into these groups in order to align with the existing administrative structure. Updating orderer capabilities is something the ordering orgs would manage independent of the application orgs. Similarly, updating application capabilities is something only the application admins would manage. By splitting the capabilities between “Orderer” and “Application”, a hypothetical network could run a v1.6 ordering service while supporting a v1.3 peer application network.

However, some capabilities cross both the ‘Application’ and ‘Orderer’ groups. As we saw earlier, adding a new MSP role type is something both the orderer and application admins agree to and need to recognize. The orderer must understand the meaning of MSP roles in order to allow the transactions to pass through ordering, while the peers must understand the roles in order to validate the transaction. These kinds of capabilities – which span both the application and orderer components – are defined in the top level “Channel” group.

Note

It is possible that the channel capabilities are defined to be at version v1.3 while the orderer and application capabilities are defined to be at versions v1.1 and v1.4, respectively. Enabling a capability at the “Channel” group level does not imply that this same capability is available at the more specific “Orderer” and “Application” group levels.

Setting Capabilities

Capabilities are set as part of the channel configuration (either as part of the initial configuration – which we’ll talk about in a moment – or as part of a reconfiguration).

Note

We have two documents that talk through different aspects of channel reconfigurations. First, we have a tutorial that will take you through the process of Adding an Org to a Channel. And we also have a document that talks through Updating a Channel Configuration which gives an overview of the different kinds of updates that are possible as well as a fuller look at the signature process.

Because new channels copy the configuration of the Orderer System Channel by default, new channels will automatically be configured to work with the orderer and channel capabilities of the Orderer System Channel and the application capabilities specified by the channel creation transaction. Channels that already exist, however, must be reconfigured.

The schema for the Capabilities value is defined in the protobuf as:

message Capabilities {
      map<string, Capability> capabilities = 1;
}

message Capability { }

As an example, rendered in JSON:

{
    "capabilities": {
        "V1_1": {}
    }
}
Capabilities in an Initial Configuration

In the configtx.yaml file distributed in the config directory of the release artifacts, there is a Capabilities section which enumerates the possible capabilities for each capability type (Channel, Orderer, and Application).

The simplest way to enable capabilities is to pick a v1.1 sample profile and customize it for your network. For example:

SampleSingleMSPSoloV1_1:
    Capabilities:
        <<: *GlobalCapabilities
    Orderer:
        <<: *OrdererDefaults
        Organizations:
            - *SampleOrg
        Capabilities:
            <<: *OrdererCapabilities
    Consortiums:
        SampleConsortium:
            Organizations:
                - *SampleOrg

Note that there is a Capabilities section defined at the root level (for the channel capabilities), and at the Orderer level (for orderer capabilities). The sample above uses a YAML reference to include the capabilities as defined at the bottom of the YAML.

When defining the orderer system channel there is no Application section, as those capabilities are defined during the creation of an application channel. To define a new channel’s application capabilities at channel creation time, the application admins should model their channel creation transaction after the SampleSingleMSPChannelV1_1 profile.

SampleSingleMSPChannelV1_1:
     Consortium: SampleConsortium
     Application:
         Organizations:
             - *SampleOrg
         Capabilities:
             <<: *ApplicationCapabilities

Here, the Application section has a new element Capabilities which references the ApplicationCapabilities section defined at the end of the YAML.

Note

The capabilities for the Channel and Orderer sections are inherited from the definition in the ordering system channel and are automatically included by the orderer during the process of channel creation.

CouchDB as the State Database

State Database options

State database options include LevelDB and CouchDB. LevelDB is the default key/value state database embedded in the peer process. CouchDB is an optional alternative external state database. Like the LevelDB key/value store, CouchDB can store any binary data that is modeled in chaincode (CouchDB attachment functionality is used internally for non-JSON binary data). But as a JSON document store, CouchDB additionally enables rich query against the chaincode data, when chaincode values (e.g. assets) are modeled as JSON data.

Both LevelDB and CouchDB support core chaincode operations such as getting and setting a key (asset), and querying based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example a composite key of owner,asset_id can be used to query all assets owned by a certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger.

If you model assets as JSON and use CouchDB, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. These types of queries are excellent for understanding what is on the ledger. Proposal responses for these types of queries are typically useful to the client application, but are not typically submitted as transactions to the ordering service. In fact, there is no guarantee the result set is stable between chaincode execution and commit time for rich queries, and therefore rich queries are not appropriate for use in update transactions, unless your application can guarantee the result set is stable between chaincode execution time and commit time, or can handle potential changes in subsequent transactions. For example, if you perform a rich query for all assets owned by Alice and transfer them to Bob, a new asset may be assigned to Alice by another transaction between chaincode execution time and commit time, and you would miss this “phantom” item.

CouchDB runs as a separate database process alongside the peer, therefore there are additional considerations in terms of setup, management, and operations. You may consider starting with the default embedded LevelDB, and move to CouchDB if you require the additional complex rich queries. It is a good practice to model chaincode asset data as JSON, so that you have the option to perform complex rich queries if needed in the future.

Note

A JSON document cannot use the following field names at the top level. These are reserved for internal use.

  • _deleted
  • _id
  • _rev
  • ~version

Using CouchDB from Chaincode

Most of the chaincode shim APIs can be utilized with either LevelDB or CouchDB state database, e.g. GetState, PutState, GetStateByRange, GetStateByPartialCompositeKey. Additionally when you utilize CouchDB as the state database and model assets as JSON in chaincode, you can perform rich queries against the JSON in the state database by using the GetQueryResult API and passing a CouchDB query string. The query string follows the CouchDB JSON query syntax.

The marbles02 fabric sample demonstrates use of CouchDB queries from chaincode. It includes a queryMarblesByOwner() function that demonstrates parameterized queries by passing an owner id into chaincode. It then queries the state data for JSON documents matching the docType of “marble” and the owner id using the JSON query syntax:

{"selector":{"docType":"marble","owner":<OWNER_ID>}}

Indexes in CouchDB are required in order to make JSON queries efficient and are required for any JSON query with a sort. Indexes can be packaged alongside chaincode in a /META-INF/statedb/couchdb/indexes directory. Each index must be defined in its own text file with extension *.json with the index definition formatted in JSON following the CouchDB index JSON syntax. For example, to support the above marble query, a sample index on the docType and owner fields is provided:

{"index":{"fields":["docType","owner"]},"ddoc":"indexOwnerDoc", "name":"indexOwner","type":"json"}

The sample index can be found in the marbles02 sample.

Any index in the chaincode’s META-INF/statedb/couchdb/indexes directory will be packaged up with the chaincode for deployment. When the chaincode is both installed on a peer and instantiated on one of the peer’s channels, the index will automatically be deployed to the peer’s channel and chaincode specific state database (if it has been configured to use CouchDB). If you install the chaincode first and then instantiate the chaincode on the channel, the index will be deployed at chaincode instantiation time. If the chaincode is already instantiated on a channel and you later install the chaincode on a peer, the index will be deployed at chaincode installation time.

Upon deployment, the index will automatically be utilized by chaincode queries. CouchDB can automatically determine which index to use based on the fields being used in a query. Alternatively, in the selector query the index can be specified using the use_index keyword.

The same index may exist in subsequent versions of the chaincode that gets installed. To change the index, use the same index name but alter the index definition. Upon installation/instantiation, the index definition will get re-deployed to the peer’s state database.

If you have a large volume of data already, and later install the chaincode, the index creation upon installation may take some time. Similarly, if you have a large volume of data already and instantiate a subsequent version of the chaincode, the index creation may take some time. Avoid calling chaincode functions that query the state database at these times as the chaincode query may time out while the index is getting initialized. During transaction processing, the indexes will automatically get refreshed as blocks are committed to the ledger.

CouchDB Configuration

CouchDB is enabled as the state database by changing the stateDatabase configuration option from goleveldb to CouchDB. Additionally, the couchDBAddress needs to be configured to point to the CouchDB to be used by the peer. The username and password properties should be populated with an admin username and password if CouchDB is configured with a username and password. Additional options are provided in the couchDBConfig section and are documented in place. Changes to core.yaml will be effective immediately after restarting the peer.

You can also pass in docker environment variables to override core.yaml values, for example CORE_LEDGER_STATE_STATEDATABASE and CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS.

Below is the stateDatabase section from core.yaml:

state:
  # stateDatabase - options are "goleveldb", "CouchDB"
  # goleveldb - default state database stored in goleveldb.
  # CouchDB - store state database in CouchDB
  stateDatabase: goleveldb
  couchDBConfig:
     # It is recommended to run CouchDB on the same server as the peer, and
     # not map the CouchDB container port to a server port in docker-compose.
     # Otherwise proper security must be provided on the connection between
     # CouchDB client (on the peer) and server.
     couchDBAddress: couchdb:5984
     # This username must have read and write authority on CouchDB
     username:
     # The password is recommended to pass as an environment variable
     # during start up (e.g. LEDGER_COUCHDBCONFIG_PASSWORD).
     # If it is stored here, the file must be access control protected
     # to prevent unintended users from discovering the password.
     password:
     # Number of retries for CouchDB errors
     maxRetries: 3
     # Number of retries for CouchDB errors during peer startup
     maxRetriesOnStartup: 10
     # CouchDB request timeout (unit: duration, e.g. 20s)
     requestTimeout: 35s
     # Limit on the number of records to return per query
     queryLimit: 10000

The CouchDB docker containers supplied with Hyperledger Fabric can have their CouchDB username and password set through the COUCHDB_USER and COUCHDB_PASSWORD environment variables passed in using Docker Compose scripting.

For CouchDB installations outside of the docker images supplied with Fabric, the local.ini file of that installation must be edited to set the admin username and password.

Docker compose scripts only set the username and password at the creation of the container. The local.ini file must be edited if the username or password is to be changed after creation of the container.

Note

CouchDB peer options are read on each peer startup.

Peer channel-based event services

General overview

In previous versions of Fabric, the peer event service was known as the event hub. This service sent events any time a new block was added to the peer’s ledger, regardless of the channel to which that block pertained, and it was only accessible to members of the organization running the eventing peer (i.e., the one being connected to for events).

Starting with v1.1, there are two new services which provide events. These services use an entirely different design to provide events on a per-channel basis. This means that registration for events occurs at the level of the channel instead of the peer, allowing for fine-grained control over access to the peer’s data. Requests to receive events are accepted from identities outside of the peer’s organization (as defined by the channel configuration). This also provides greater reliability and a way to receive events that may have been missed (whether due to a connectivity issue or because the peer is joining a network that has already been running).

Available services

  • Deliver

This service sends entire blocks that have been committed to the ledger. If any events were set by a chaincode, these can be found within the ChaincodeActionPayload of the block.

  • DeliverFiltered

This service sends “filtered” blocks, minimal sets of information about blocks that have been committed to the ledger. It is intended to be used in a network where owners of the peers wish for external clients to primarily receive information about their transactions and the status of those transactions. If any events were set by a chaincode, these can be found within the FilteredChaincodeAction of the filtered block.

Note

The payload of chaincode events will not be included in filtered blocks.

How to register for events

Registration for events from either service is done by sending an envelope containing a deliver seek info message to the peer that contains the desired start and stop positions, the seek behavior (block until ready or fail if not ready). There are helper variables SeekOldest and SeekNewest that can be used to indicate the oldest (i.e. first) block or the newest (i.e. last) block on the ledger. To have the services send events indefinitely, the SeekInfo message should include a stop position of MAXINT64.
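
A minimal Go sketch of building such a SeekInfo message follows, assuming the orderer protos ship under the Fabric 1.x import path; the enclosing signed envelope and channel header (including the TLS certificate hash mentioned below) are omitted.

package main

import (
	"math"

	"github.com/hyperledger/fabric/protos/orderer"
)

// newSeekInfo asks for events from the first block onwards, with a stop
// position of MAXINT64 so that the service keeps sending indefinitely.
func newSeekInfo() *orderer.SeekInfo {
	return &orderer.SeekInfo{
		Start: &orderer.SeekPosition{
			Type: &orderer.SeekPosition_Oldest{Oldest: &orderer.SeekOldest{}},
		},
		Stop: &orderer.SeekPosition{
			Type: &orderer.SeekPosition_Specified{
				Specified: &orderer.SeekSpecified{Number: math.MaxUint64},
			},
		},
		Behavior: orderer.SeekInfo_BLOCK_UNTIL_READY, // block until ready, rather than fail
	}
}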

Note

If mutual TLS is enabled on the peer, the TLS certificate hash must be set in the envelope’s channel header.

By default, both services use the Channel Readers policy to determine whether to authorize requesting clients for events.

Overview of deliver response messages

The event services send back DeliverResponse messages.

Each message contains one of the following:

  • status – HTTP status code. Both services will return the appropriate failure code if any failure occurs; otherwise, it will return 200 - SUCCESS once the service has completed sending all information requested by the SeekInfo message.
  • block – returned only by the Deliver service.
  • filtered block – returned only by the DeliverFiltered service.

A filtered block contains:

  • channel ID.
  • number (i.e. the block number).
  • array of filtered transactions:
    • transaction ID.
    • type (e.g. ENDORSER_TRANSACTION, CONFIG).
    • transaction validation code.
    • filtered transaction actions:
      • array of filtered chaincode actions:
        • chaincode event for the transaction (with the payload nilled out).

SDK event documentation

For further details on using the event services, refer to the SDK documentation.

Read-Write set semantics

This document discusses the details of the current implementation of the semantics of read-write sets.


Transaction simulation and read-write set

During simulation of a transaction at an endorser, a read-write set is prepared for the transaction. The read set contains a list of unique keys and their committed versions that the transaction reads during simulation. The write set contains a list of unique keys (though there can be overlap with the keys present in the read set) and their new values that the transaction writes. A delete marker is set (in the place of new value) for the key if the update performed by the transaction is to delete the key.

背书节点 模拟执行交易期间,会生成该交易对应的一个读写集。读集合(read set) 包含了在模拟执行交易期间,所读取的一组不重复的 key 及其版本号的列表。写集合(write set) 包含了一组不重复的 key(这些 key 可能和读集合中的 key 有重合)以及该交易写入的这些 key 对应的值。如果交易对应的更新操作是删除某个 key,则该 key 对应的值会被设置删除标记。

Further, if the transaction writes a value multiple times for a key, only the last written value is retained. Also, if a transaction reads a value for a key, the value in the committed state is returned even if the transaction has updated the value for the key before issuing the read. In other words, Read-your-writes semantics are not supported.


As noted earlier, the versions of the keys are recorded only in the read set; the write set just contains the list of unique keys and their latest values set by the transaction.


There could be various schemes for implementing versions. The minimal requirement for a versioning scheme is to produce non-repeating identifiers for a given key. For instance, using monotonically increasing numbers for versions can be one such scheme. In the current implementation, we use a blockchain height based versioning scheme in which the height of the committing transaction is used as the latest version for all the keys modified by the transaction. In this scheme, the height of a transaction is represented by a tuple (blockNumber, txNumber), where txNumber is the height of the transaction within the block. This scheme has many advantages over the incremental number scheme - primarily, it enables other components such as statedb, transaction simulation and validation for making efficient design choices.

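A minimal Go sketch of this versioning scheme, assuming the (blockNumber, txNumber) tuple just described; the types are illustrative, not Fabric's internal representation.

package main

// Height is a key version under the blockchain-height scheme: the position
// of the committing transaction in the chain.
type Height struct {
	BlockNum uint64 // block that committed the write
	TxNum    uint64 // height of the transaction within that block
}

// Equal is what validation needs: a read is valid iff the version recorded
// in the read set equals the key's current version in the world state.
func (h Height) Equal(o Height) bool {
	return h.BlockNum == o.BlockNum && h.TxNum == o.TxNum
}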

Following is an illustration of an example read-write set prepared by simulation of a hypothetical transaction. For the sake of simplicity, in the illustrations, we use the incremental numbers for representing the versions.


<TxReadWriteSet>
  <NsReadWriteSet name="chaincode1">
    <read-set>
      <read key="K1", version="1">
      <read key="K2", version="1">
    </read-set>
    <write-set>
      <write key="K1", value="V1">
      <write key="K3", value="V2">
      <write key="K4", isDelete="true">
    </write-set>
  </NsReadWriteSet>
</TxReadWriteSet>

Additionally, if the transaction performs a range query during simulation, the range query as well as its results will be added to the read-write set as query-info.


Transaction validation and updating world state using read-write set

A committer uses the read set portion of the read-write set for checking the validity of a transaction and the write set portion of the read-write set for updating the versions and the values of the affected keys.


In the validation phase, a transaction is considered valid if the version of each key present in the read set of the transaction matches the version for the same key in the world state - assuming all the preceding valid transactions (including the preceding transactions in the same block) are committed (committed-state). An additional validation is performed if the read-write set also contains one or more query-info.


This additional validation should ensure that no key has been inserted/deleted/updated in the super range (i.e., union of the ranges) of the results captured in the query-info(s). In other words, if we re-execute any of the range queries (that the transaction performed during simulation) during validation on the committed-state, it should yield the same results that were observed by the transaction at the time of simulation. This check ensures that if a transaction observes phantom items during commit, the transaction should be marked as invalid. Note that this phantom protection is limited to range queries (i.e., GetStateByRange function in the chaincode) and is not yet implemented for other queries (i.e., GetQueryResult function in the chaincode). Other queries are at risk of phantoms, and should therefore only be used in read-only transactions that are not submitted to ordering, unless the application can guarantee the stability of the result set between simulation and validation/commit time.


If a transaction passes the validity check, the committer uses the write set for updating the world state. In the update phase, for each key present in the write set, the value in the world state for the same key is set to the value as specified in the write set. Further, the version of the key in the world state is changed to reflect the latest version.

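Putting the two phases together, here is a compact Go sketch of validate-then-commit against a world state; the data shapes are deliberately simplified (a single number stands in for the version tuple) and are not Fabric's internal types.

package main

// versioned pairs a committed value with its version.
type versioned struct {
	value   []byte
	version uint64 // a single number stands in for the (blockNumber, txNumber) tuple
}

// txRWSet is a simplified read-write set.
type txRWSet struct {
	reads  map[string]uint64 // key -> version observed during simulation
	writes map[string][]byte // key -> new value; nil marks a delete
}

// validateAndCommit marks the transaction valid iff every read version still
// matches the committed state, then applies the write set at newVersion.
func validateAndCommit(state map[string]versioned, rw txRWSet, newVersion uint64) bool {
	for k, ver := range rw.reads {
		if state[k].version != ver {
			return false // stale read: the transaction is invalid
		}
	}
	for k, val := range rw.writes {
		if val == nil {
			delete(state, k) // delete marker
			continue
		}
		state[k] = versioned{value: val, version: newVersion}
	}
	return true
}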

Example simulation and validation

This section helps with understanding the semantics through an example scenario. For the purpose of this example, the presence of a key, k, in the world state is represented by a tuple (k,ver,val) where ver is the latest version of the key k having val as its value.


Now, consider a set of five transactions T1, T2, T3, T4, and T5, all simulated on the same snapshot of the world state. The following snippet shows the snapshot of the world state against which the transactions are simulated and the sequence of read and write activities performed by each of these transactions.


World state: (k1,1,v1), (k2,1,v2), (k3,1,v3), (k4,1,v4), (k5,1,v5)
T1 -> Write(k1, v1'), Write(k2, v2')
T2 -> Read(k1), Write(k3, v3')
T3 -> Write(k2, v2'')
T4 -> Write(k2, v2'''), read(k2)
T5 -> Write(k6, v6'), read(k5)

Now, assume that these transactions are ordered in the sequence T1,...,T5 (they could be contained in a single block or in different blocks).


  1. T1 passes validation because it does not perform any read. Further, the tuples of keys k1 and k2 in the world state are updated to (k1,2,v1'), (k2,2,v2')
  2. T2 fails validation because it reads a key, k1, which was modified by a preceding transaction - T1
  3. T3 passes the validation because it does not perform a read. Further the tuple of the key, k2, in the world state is updated to (k2,3,v2'')
  4. T4 fails the validation because it reads a key, k2, which was modified by a preceding transaction - T1
  5. T5 passes validation because it reads a key, k5, which was not modified by any of the preceding transactions

Note: Transactions with multiple read-write sets are not yet supported.


Gossip data dissemination protocol

Hyperledger Fabric optimizes blockchain network performance, security and scalability by dividing workload across transaction execution (endorsing and committing) peers and transaction ordering nodes. This decoupling of network operations requires a secure, reliable and scalable data dissemination protocol to ensure data integrity and consistency. To meet these requirements, Hyperledger Fabric implements a gossip data dissemination protocol.

Gossip protocol

Peers leverage gossip to broadcast ledger and channel data in a scalable fashion. Gossip messaging is continuous, and each peer on a channel is constantly receiving current and consistent ledger data, from multiple peers. Each gossiped message is signed, thereby allowing Byzantine participants sending faked messages to be easily identified and the distribution of the message(s) to unwanted targets to be prevented. Peers affected by delays, network partitions or other causations resulting in missed blocks, will eventually be synced up to the current ledger state by contacting peers in possession of these missing blocks.

The gossip-based data dissemination protocol performs three primary functions on a Hyperledger Fabric network:

  1. Manages peer discovery and channel membership, by continually identifying available member peers, and eventually detecting peers that have gone offline.
  2. Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data.
  3. Brings newly connected peers up to speed by allowing peer-to-peer state transfer updates of ledger data.

Gossip-based broadcasting operates by peers receiving messages from other peers on the channel, and then forwarding these messages to a number of randomly-selected peers on the channel, where this number is a configurable constant. Peers can also exercise a pull mechanism, rather than waiting for delivery of a message. This cycle repeats, with the result of channel membership, ledger and state information continually being kept current and in sync. For dissemination of new blocks, the leader peer on the channel pulls the data from the ordering service and initiates gossip dissemination to peers.
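
As a conceptual sketch of the push half of that cycle, the Go function below forwards a received message to a configurable number of randomly selected channel peers; the types, the send callback, and the fan-out handling are illustrative, not the gossip component's real implementation.

package main

import "math/rand"

// forward pushes a received message to fanout randomly selected peers on the
// channel; this is the push half of the gossip cycle described above.
func forward(msg []byte, channelPeers []string, fanout int, send func(peer string, msg []byte)) {
	// shuffle a copy of the membership view and take the first fanout peers
	peers := append([]string(nil), channelPeers...)
	rand.Shuffle(len(peers), func(i, j int) { peers[i], peers[j] = peers[j], peers[i] })
	if fanout > len(peers) {
		fanout = len(peers)
	}
	for _, p := range peers[:fanout] {
		send(p, msg)
	}
}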

Leader election

The leader election mechanism is used to elect one peer per organization which will maintain a connection with the ordering service and initiate distribution of newly arrived blocks across the peers of its own organization. Leveraging leader election provides the system with the ability to efficiently utilize the bandwidth of the ordering service. There are two possible operation modes for the leader election module:

  1. Static - system administrator manually configures one peer in the organization to be the leader, e.g. one to maintain open connection with the ordering service.
  2. Dynamic - peers execute a leader election procedure to select one peer in an organization to become leader, pull blocks from the ordering service, and disseminate blocks to the other peers in the organization.
Static leader election

Using static leader election allows you to manually define a set of leader peers within the organization. It is possible to define a single node as leader, or all available peers, but keep in mind that having too many peers connect to the ordering service may lead to inefficient bandwidth utilization. To enable static leader election mode, configure the following parameters within the section of core.yaml:

peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: false
        orgLeader: true

Alternatively these parameters could be configured and overridden with environment variables:

export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=true
Note:
  1. The following configuration will keep the peer in stand-by mode, i.e. the peer will not try to become a leader:
export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=false
  2. Setting both CORE_PEER_GOSSIP_USELEADERELECTION and CORE_PEER_GOSSIP_ORGLEADER to true is ambiguous and will lead to an error.
  3. In a static configuration the organization admin is responsible for providing high availability of the leader node in case of failures or crashes.
Dynamic leader election

Dynamic leader election enables organization peers to elect one peer which will connect to the ordering service and pull new blocks. The leader is elected for the set of peers of each organization independently.

The elected leader is responsible for sending heartbeat messages to the rest of the peers as evidence of liveness. If one or more peers do not receive heartbeat updates within a period of time, they initiate a new round of leader election, eventually selecting a new leader. In the worst case of a network partition there will be more than one active leader for an organization, which guarantees resiliency and availability and allows the organization’s peers to continue making progress. After the network partition heals, one of the leaders will relinquish its leadership, so that in steady state, with no network partitions, there is only one active leader per organization connecting to the ordering service.

The following configuration controls the frequency of the leader heartbeat messages:

peer:
    # Gossip related configuration
    gossip:
        election:
            leaderAliveThreshold: 10s

In order to enable dynamic leader election, the following parameters need to be configured within core.yaml:

peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: true
        orgLeader: false

Alternatively these parameters could be configured and overridden with environment variables:

export CORE_PEER_GOSSIP_USELEADERELECTION=true
export CORE_PEER_GOSSIP_ORGLEADER=false

Gossip messaging

Online peers indicate their availability by continually broadcasting “alive” messages, with each containing the public key infrastructure (PKI) ID and the signature of the sender over the message. Peers maintain channel membership by collecting these alive messages; if no peer receives an alive message from a specific peer, this “dead” peer is eventually purged from channel membership. Because “alive” messages are cryptographically signed, malicious peers can never impersonate other peers, as they lack a signing key authorized by a root certificate authority (CA).

In addition to the automatic forwarding of received messages, a state reconciliation process synchronizes world state across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair its own state if discrepancies are identified. Because fixed connectivity is not required to maintain gossip-based data dissemination, the process reliably provides data consistency and integrity to the shared ledger, including tolerance for node crashes.

Because channels are segregated, peers on one channel cannot message or share information on any other channel. Though any peer can belong to multiple channels, partitioned messaging prevents blocks from being disseminated to peers that are not in the channel by applying message routing policies based on peers’ channel subscriptions.

Notes:

  1. Security of point-to-point messages is handled by the peer TLS layer, and does not require signatures. Peers are authenticated by their certificates, which are assigned by a CA. Although TLS certs are also used, it is the peer certificates that are authenticated in the gossip layer. Ledger blocks are signed by the ordering service, and then delivered to the leader peers on a channel.
  2. Authentication is governed by the membership service provider for the peer. When the peer connects to the channel for the first time, the TLS session binds with the membership identity. This essentially authenticates each peer to the connecting peer, with respect to membership in the network and channel.

Hyperledger Fabric FAQ

Endorsement

Endorsement architecture:

  1. How many peers in the network need to endorse a transaction?

A. The number of peers required to endorse a transaction is driven by the endorsement policy that is specified at chaincode deployment time.

  2. Does an application client need to connect to all peers?

A. Clients only need to connect to as many peers as are required by the endorsement policy for the chaincode.

Security & Access Control

Data Privacy and Access Control:

  1. How do I ensure data privacy?

A. There are various aspects to data privacy. First, you can segregate your network into channels, where each channel represents a subset of participants that are authorized to see the data for the chaincodes that are deployed to that channel. Second, within a channel you can restrict the input data to chaincode to the set of endorsers only, by using visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data. Third, you can hash or encrypt the data before calling chaincode. If you hash the data then you will need to provide a means to share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys. Fourth, you can restrict data access to certain roles in your organization, by building access control into the chaincode logic. Fifth, ledger data at rest can be encrypted via file system encryption on the peer, and data in-transit is encrypted via TLS.

  2. Do the orderers see the transaction data?

A. No, the orderers only order transactions, they do not open the transactions. If you do not want the data to go through the orderers at all, and you are only concerned about the input data, then you can use visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data. Therefore, the input data can be private to the endorsers only. If you do not want the orderers to see chaincode output, then you can hash or encrypt the data before calling chaincode. If you hash the data then you will need to provide a means to share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys.

Application-side Programming Model

Transaction execution result:

  1. How do application clients know the outcome of a transaction?

A. The transaction simulation results are returned to the client by the endorser in the proposal response. If there are multiple endorsers, the client can check that the responses are all the same, and submit the results and endorsements for ordering and commitment. Ultimately the committing peers will validate or invalidate the transaction, and the client becomes aware of the outcome via an event that the SDK makes available to the application client.

Ledger queries:

Q. How do I query the ledger data?

A. Within chaincode you can query based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example, a composite key of (owner, asset_id) can be used to query all assets owned by a certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger.

If you model asset data as JSON in chaincode and use CouchDB as the state database, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. The application client can perform read-only queries, but these responses are not typically submitted as part of transactions to the ordering service.

Q. How do I query the historical data to understand data provenance?

A. The chaincode API GetHistoryForKey() will return the history of values for a key.

Q. How can I guarantee that a query result is correct, especially when the peer being queried may be recovering and catching up on block processing?

A. The client can query multiple peers, compare their block heights, compare their query results, and favor the peers at the higher block heights.

Chaincode (Smart Contracts and Digital Assets)

Q. Does Hyperledger Fabric support smart contract logic?

A. Yes. We call this feature chaincode. It is our interpretation of the smart contract method/algorithm, with additional features.

A chaincode is programmatic code deployed on the network, where it is executed and validated by chain validators together during the consensus process. Developers can use chaincodes to develop business contracts, asset definitions, and collectively-managed decentralized applications.

Q. How do I create a business contract?

A. There are generally two ways to develop business contracts: the first way is to code individual contracts into standalone instances of chaincode; the second way, and probably the more efficient way, is to use chaincode to create decentralized applications that manage the life cycle of one or multiple types of business contracts, and let end users instantiate instances of contracts within these applications.

Q. How do I create assets?

A. Users can use chaincode (for business rules) and membership service (for digital tokens) to design assets, as well as the logic that manages them.

There are two popular approaches to defining assets in most blockchain solutions: the stateless UTXO model, where account balances are encoded into past transaction records; and the account model, where account balances are kept in state storage space on the ledger.

Each approach carries its own benefits and drawbacks. This blockchain technology does not advocate either one over the other. Instead, one of our first requirements was to ensure that both approaches can be easily implemented.

Q. Which languages are supported for writing chaincode?

A. Chaincode can be written in any programming language and executed in containers. The first fully supported chaincode language is Golang.

Support for additional languages and the development of a templating language have been discussed, and more details will be released in the near future.

It is also possible to build Hyperledger Fabric applications using Hyperledger Composer.

Q. Does Hyperledger Fabric have a native currency?

A. No. However, if you really need a native currency for your chain network, you can develop your own native currency with chaincode. One common attribute of native currency is that some amount will get transacted (the chaincode defining that currency will get called) every time a transaction is processed on its chain.

Differences in Most Recent Releases

Q. As part of the v1.0.0 release, what are the key differences between v0.6 and v1.0?

A. The differences between any subsequent releases are provided together with the Release Notes. Since Fabric is a pluggable modular framework, you can refer to the design docs for further information on these differences.

Q. Where do I get help for technical questions not answered above?

A. Please use StackOverflow.

Ordering Service FAQ

General

Question: I have an ordering service up and running and want to switch consensus algorithms. How do I do that?

Answer: This is explicitly not supported.

Question: What is the orderer system channel?

Answer: The orderer system channel (sometimes called ordering system channel) is the channel the orderer is initially bootstrapped with. It is used to orchestrate channel creation. The orderer system channel defines consortia and the initial configuration for new channels. At channel creation time, the organization definition in the consortium, the /Channel group’s values and policies, as well as the /Channel/Orderer group’s values and policies, are all combined to form the new initial channel definition.

Question: If I update my application channel, should I update my orderer system channel?

Answer: Once an application channel is created, it is managed independently of any other channel (including the orderer system channel). Depending on the modification, the change may or may not be desirable to port to other channels. In general, MSP changes should be synchronized across all channels, while policy changes are more likely to be specific to a particular channel.

Question: Can I have an organization act both in an ordering and application role?

Answer: Although this is possible, it is a highly discouraged configuration. By default the /Channel/Orderer/BlockValidation policy allows any valid certificate of the ordering organizations to sign blocks. If an organization is acting both in an ordering and application role, then this policy should be updated to restrict block signers to the subset of certificates authorized for ordering.

Question: I want to write a consensus implementation for Fabric. Where do I begin?

Answer: A consensus plugin needs to implement the Consenter and Chain interfaces defined in the consensus package. There are two plugins built against these interfaces already: solo and kafka. You can study them to take cues for your own implementation. The ordering service code can be found under the orderer package.
Question: I want to change my ordering service configuration, e.g. the batch timeout, after I start the network. What should I do?

Answer: This falls under reconfiguring the network. Please consult the topic on configtxlator.

Solo

Question: How can I deploy Solo in production?

Answer: Solo is not intended for production. It is not, and will never be, fault tolerant.

Kafka

Question: How do I remove a node from the ordering service?

Answer: This is a two-step process:

  1. Add the node’s certificate to the relevant orderer’s MSP CRL to prevent peers/clients from connecting to it.
  2. Prevent the node from connecting to the Kafka cluster by leveraging standard Kafka access control measures such as TLS CRLs, or firewalling.
Question: I have never deployed a Kafka/ZK cluster before, and I want to use the Kafka-based ordering service. How do I proceed?

Answer: The Hyperledger Fabric documentation assumes the reader generally has the operational expertise to set up, configure, and manage a Kafka cluster (see the Caveat emptor note). If you insist on proceeding without such expertise, you should complete, at a minimum, the first 6 steps of the Kafka Quickstart guide before experimenting with the Kafka-based ordering service.

Question: Where can I find a Docker composition for a network that uses the Kafka-based ordering service?

Answer: Consult the end-to-end CLI example.

Question: Why is there a ZooKeeper dependency in the Kafka-based ordering service?

Answer: Kafka uses it internally for coordination between its brokers.

Question: I’m trying to follow the BYFN example and get a “service unavailable” error, what should I do?

Answer: Check the ordering service’s logs. A “Rejecting deliver request because of consenter error” log message is usually indicative of a connection problem with the Kafka cluster. Ensure that the Kafka cluster is set up properly, and is reachable by the ordering service’s nodes.

BFT

Question: When is a BFT version of the ordering service going to be available?

Answer: No date has been set. We are working towards a release during the 1.x cycle, i.e. it will come with a minor version upgrade in Fabric. Track FAB-33 for updates.

Contributions Welcome!

We welcome contributions to Hyperledger in many forms, and there’s always plenty to do!

First things first, please review the Hyperledger Code of Conduct before participating. It is important that we keep things civil.

Maintainers

Active Maintainers

Name | Gerrit | GitHub | RocketChat | email
Artem Barger | c0rwin | c0rwin | c0rwin | bartem@il.ibm.com
Binh Nguyen | binhn | binhn | binhn | binh1010010110@gmail.com
Chris Ferris | ChristopherFerris | christo4ferris | cbf | chris.ferris@gmail.com
Dave Enyeart | denyeart | denyeart | dave.enyeart | enyeart@us.ibm.com
Gari Singh | mastersingh24 | mastersingh24 | garisingh | gari.r.singh@gmail.com
Greg Haskins | greg.haskins | ghaskins | ghaskins | gregory.haskins@gmail.com
Jason Yellick | jyellick | jyellick | jyellick | jyellick@us.ibm.com
Jim Zhang | jimthematrix | jimthematrix | jimthematrix | jim_the_matrix@hotmail.com
Jonathan Levi | JonathanLevi | JonathanLevi | JonathanLevi | jonathan@hacera.com
Keith Smith | smithbk | smithbk | smithbk | bksmith@us.ibm.com
Kostas Christidis | kchristidis | kchristidis | kostas | kostas@gmail.com
Manish Sethi | manish-sethi | manish-sethi | manish-sethi | manish.sethi@gmail.com
Srinivasan Muralidharan | muralisr | muralisrini | muralisr | srinivasan.muralidharan99@gmail.com
Yacov Manevich | yacovm | yacovm | yacovm | yacovm@il.ibm.com
Yaoguo Jiang | jiangyaoguo | jiangyaoguo | jiangyaoguo | jiangyaoguo@gmail.com

Retired Maintainers

Gabor Hosszu | hgabre | gabre | hgabor | gabor@digitalasset.com
Sheehan Anderson | sheehan | srderson | sheehan | sranderson@gmail.com
Tamas Blummer | TamasBlummer | tamasblummer | tamas | tamas@digitalasset.com

Using Jira to understand current work items

This document has been created to give further insight into the work in progress towards the Hyperledger Fabric v1 architecture based on the community roadmap. The requirements for the roadmap are being tracked in Jira.

We decided to organize work in sprints to better track and show a prioritized order of items to be implemented, based on the feedback received. We’ve done this via boards. To see these boards and the priorities, click on Boards -> Manage Boards:

[Screenshot: Jira boards]

Now on the left side of the screen click on All boards:

[Screenshot: Jira boards]

On this page you will see all the public (and restricted) boards that have been created. If you want to see the items with current sprint focus, click on the boards where the Visibility column is All Users and the Board type column is Scrum. For example, the board named Consensus:

[Screenshot: Jira boards]

When you click on Consensus under Board name you will be directed to a page that contains the following columns:

[Screenshot: Jira boards]

The meanings of these columns are as follows:

  • Backlog – items slated for the current sprint (sprints are defined in two-week iterations), but not currently in progress
  • In progress – items currently being worked on by someone in the community
  • In Review – items waiting to be reviewed and merged in Gerrit
  • Done – items merged and complete in the sprint

If you want to see all items in the backlog for a given feature set, click on the stacked rows on the left navigation of the screen:

[Screenshot: Jira boards]

This shows you items slated for the current sprint at the top, and all items in the backlog at the bottom. Items are listed in priority order.

If there is an item you are interested in working on, want more information or have questions, or if there is an item that you feel needs to be in higher priority, please add comments directly to the Jira item. All feedback and help is very much appreciated.

Setting up the development environment

Overview

Prior to the v1.0.0 release, the development environment utilized Vagrant running an Ubuntu image, which in turn launched Docker containers as a means of ensuring a consistent experience for developers who might be working with varying platforms, such as macOS, Windows, or Linux. Advances in Docker have enabled native support on the most popular development platforms: macOS and Windows. Hence, we have reworked our build to take full advantage of these advances. While we still maintain a Vagrant-based approach that can be used for older versions of macOS and Windows that Docker does not support, we strongly encourage the non-Vagrant development setup.

Note that while the Vagrant-based development setup could not be used in a cloud context, the Docker-based build does support cloud platforms such as AWS, Azure, Google, and IBM, to name a few. Please follow the instructions for Ubuntu builds, below.

Prerequisites

  • Git client
  • Go - 1.9 or later (for v1.0.X releases, use Go 1.7.X)
  • (macOS) Xcode must be installed
  • Docker - 17.06.2-ce or later
  • Docker Compose - 1.14.0 or later
  • Pip
  • (macOS) you may need to install gnutar, as macOS comes with bsdtar as the default, but the build uses some gnutar flags. You can use Homebrew to install it as follows:
brew install gnu-tar --with-default-names
  • (macOS) Libtool. You can use Homebrew to install it as follows:
brew install libtool
  • (only if using Vagrant) - Vagrant - 1.9 or later
  • (only if using Vagrant) - VirtualBox - 5.0 or later
  • BIOS Enabled Virtualization - Varies based on hardware
  • Note: the BIOS virtualization setting may be within the CPU or Security settings of the BIOS

pip and behave

pip install --upgrade pip

# pip packages required for some behave tests
pip install -r devenv/bddtests-requirements.txt

Steps

Set your GOPATH

Make sure you have properly set up your host’s GOPATH environment variable. This allows for building both within the host and within the VM.

If you installed Go into a location other than the standard one your Go distribution assumes, make sure that you also set the GOROOT environment variable.

Note to Windows users

If you are running Windows, before running any git clone commands, run the following command.

git config --get core.autocrlf

If core.autocrlf is set to true, you must set it to false by running

git config --global core.autocrlf false

If you continue with core.autocrlf set to true, the vagrant up command will fail with the error:

./setup.sh: /bin/bash^M: bad interpreter: No such file or directory

Cloning the Hyperledger Fabric source

Since Hyperledger Fabric is written in Go, you’ll need to clone the source repository to your $GOPATH/src directory. If your $GOPATH has multiple path components, then you will want to use the first one. There’s a little bit of setup needed:

cd $GOPATH/src
mkdir -p github.com/hyperledger
cd github.com/hyperledger

Recall that we are using Gerrit for source control, which has its own internal git repositories. Hence, we will need to clone from Gerrit. For brevity, the command is as follows:

git clone ssh://LFID@gerrit.hyperledger.org:29418/fabric && scp -p -P 29418 LFID@gerrit.hyperledger.org:hooks/commit-msg fabric/.git/hooks/

Note: Of course, you would want to replace LFID with your own Linux Foundation ID.

Bootstrapping the VM using Vagrant

If you are planning on using the Vagrant developer environment, the following steps apply. Again, we recommend against its use except for developers that are limited to older versions of macOS and Windows that are not supported by Docker for Mac or Windows.

cd $GOPATH/src/github.com/hyperledger/fabric/devenv
vagrant up

Go get coffee... this will take a few minutes. Once complete, you should be able to ssh into the Vagrant VM just created.

vagrant ssh

Once inside the VM, you can find the source under $GOPATH/src/github.com/hyperledger/fabric. It is also mounted as /hyperledger.

Building Hyperledger Fabric

Once you have all the dependencies installed, and have cloned the repository, you can proceed to build and test Hyperledger Fabric.

Notes

NOTE: Any time you change any of the files in your local fabric directory (under $GOPATH/src/github.com/hyperledger/fabric), the update will be instantly available within the VM fabric directory.

NOTE: If you intend to run the development environment behind an HTTP Proxy, you need to configure the guest so that the provisioning process may complete. You can achieve this via the vagrant-proxyconf plugin. Install with vagrant plugin install vagrant-proxyconf and then set the VAGRANT_HTTP_PROXY and VAGRANT_HTTPS_PROXY environment variables before you execute vagrant up. More details are available here: https://github.com/tmatilai/vagrant-proxyconf/

NOTE: The first time you run this command it may take quite a while to complete (it could take 30 minutes or more depending on your environment) and at times it may look like it’s not doing anything. As long as you don’t get any error messages, just leave it alone; it’s all good, it’s just cranking.

NOTE to Windows 10 Users: There is a known problem with vagrant on Windows 10 (see mitchellh/vagrant#6754). If the vagrant up command fails it may be because you do not have the Microsoft Visual C++ Redistributable package installed. You can download the missing package at the following address: http://www.microsoft.com/en-us/download/details.aspx?id=8328

NOTE: The inclusion of the miekg/pkcs11 package introduces an external dependency on the ltdl.h header file during a build of fabric. Please ensure your libtool and libltdl-dev packages are installed. Otherwise, you may get an ltdl.h header missing error. You can install the missing packages with: sudo apt-get install -y build-essential git make curl unzip g++ libtool.

Building Hyperledger Fabric

The following instructions assume that you have already set up your development environment.

To build Hyperledger Fabric:

cd $GOPATH/src/github.com/hyperledger/fabric
make dist-clean all

Running the unit tests

Use the following sequence to run all unit tests:

cd $GOPATH/src/github.com/hyperledger/fabric
make unit-test

To run a subset of tests, set the TEST_PKGS environment variable. Specify a list of packages (separated by space), for example:

export TEST_PKGS="github.com/hyperledger/fabric/core/ledger/..."
make unit-test

To run a specific test, use the -run RE flag, where RE is a regular expression that matches the test case name. To run tests with verbose output, use the -v flag. For example, to run the TestGetFoo test case, change to the directory containing foo_test.go and run:

go test -v -run=TestGetFoo

Running Node.js Client SDK Unit Tests

You must also run the Node.js unit tests to ensure that the Node.js client SDK is not broken by your changes. To run the Node.js unit tests, follow the instructions here.

Running Behave BDD Tests

Note: currently, the behave tests must be run from within the Vagrant environment. See the development environment setup instructions if you have not already set up your Vagrant environment.

Behave tests will set up networks of peers with different security and consensus configurations and verify that transactions run properly. To run these tests:

cd $GOPATH/src/github.com/hyperledger/fabric
make behave

Some of the Behave tests run inside Docker containers. If a test fails and you want to have the logs from the Docker containers, run the tests with this option:

cd $GOPATH/src/github.com/hyperledger/fabric/bddtests
behave -D logs=Y

Building outside of Vagrant

It is possible to build the project and run peers outside of Vagrant. Generally speaking, you have to ‘translate’ the Vagrant setup file to the platform of your choice.

Building on Z

To make building on Z easier and faster, this script is provided (which is similar to the setup file provided for vagrant). This script has been tested only on RHEL 7.2 and has some assumptions one might want to re-visit (firewall settings, development as root user, etc.). It is however sufficient for development in a personally-assigned VM instance.

To get started, from a freshly installed OS:

sudo su
yum install git
mkdir -p $HOME/git/src/github.com/hyperledger
cd $HOME/git/src/github.com/hyperledger
git clone http://gerrit.hyperledger.org/r/fabric
source fabric/devenv/setupRHELonZ.sh

From this point, you can proceed as described above for the Vagrant development environment.

cd $GOPATH/src/github.com/hyperledger/fabric
make peer unit-test behave

Building on Power Platform

Development and build on Power (ppc64le) systems is done outside of Vagrant, as outlined here. For ease of setting up the dev environment on Ubuntu, invoke this script as root. This script has been validated on Ubuntu 16.04 and assumes certain things (for example, that the development system has OS repositories in place, firewall settings, etc.) and in general can be improved further.

To get started on a Power server installed with Ubuntu, first ensure you have properly set up your host’s GOPATH environment variable. Then, execute the following commands to build the fabric code:

mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
git clone http://gerrit.hyperledger.org/r/fabric
sudo ./fabric/devenv/setupUbuntuOnPPC64le.sh
cd $GOPATH/src/github.com/hyperledger/fabric
make dist-clean all

Configuration

Configuration utilizes the viper and cobra libraries.

There is a core.yaml file that contains the configuration for the peer process. Many of the configuration settings can be overridden on the command line by setting environment variables that match the configuration setting name, prefixed with ‘CORE_’. For example, logging level manipulation through the environment is shown below:

CORE_PEER_LOGGING_LEVEL=CRITICAL peer

Requesting a Linux Foundation Account

Contributions to the Hyperledger Fabric code base require a Linux Foundation account. Follow the steps below to create a Linux Foundation account.

Creating a Linux Foundation ID

  1. Go to the Linux Foundation ID website.
  2. Select the option I need to create a Linux Foundation ID.
  3. Fill out the form that appears:
  4. Open your email account and look for a message with the subject line: “Validate your Linux Foundation ID email”.
  5. Open the received URL to validate your email address.
  6. Verify that the browser displays the message You have successfully validated your e-mail address.
  7. Access Gerrit by selecting Sign In:
  8. Use your Linux Foundation ID to Sign In:

Configuring Gerrit to Use SSH

Gerrit uses SSH to interact with your Git client. An SSH private key needs to be generated on the development machine with a matching public key on the Gerrit server.

If you already have an SSH key-pair, skip this section.

As an example, we provide the steps to generate the SSH key-pair on a Linux environment. Follow the equivalent steps on your OS.

  1. Create a key-pair, enter:
ssh-keygen -t rsa -C "John Doe john.doe@example.com"

Note: This will ask you for a passphrase to protect the private key as it generates a unique key. Please keep this passphrase private, and DO NOT enter a blank passphrase.

The generated key-pair is found in: ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.

  2. Add the private key in the id_rsa file to your key ring, e.g.:
ssh-add ~/.ssh/id_rsa

Once the key-pair has been generated, the public key must be added to Gerrit.

Follow these steps to add your public key id_rsa.pub to the Gerrit account:

  1. Go to Gerrit.
  2. Click on your account name in the upper right corner.
  3. From the pop-up menu, select Settings.
  4. On the left side menu, click on SSH Public Keys.
  5. Paste the contents of your public key ~/.ssh/id_rsa.pub and click Add key.

Note: The id_rsa.pub file can be opened with any text editor. Ensure that all the contents of the file are selected, copied and pasted into the Add SSH key window in Gerrit.

Note: The SSH key generation instructions operate on the assumption that you are using the default naming. It is possible to generate multiple SSH keys and to name the resulting files differently. See the ssh-keygen documentation for details on how to do that. Once you have generated non-default keys, you need to configure SSH to use the correct key for Gerrit. In that case, you need to create a ~/.ssh/config file modeled after the one below.

host gerrit.hyperledger.org
 HostName gerrit.hyperledger.org
 IdentityFile ~/.ssh/id_rsa_hyperledger_gerrit
 User <LFID>

where <LFID> is your Linux Foundation ID and the value of IdentityFile is the name of the private key file you generated.

Warning: Potential Security Risk! Do not copy your private key ~/.ssh/id_rsa. Use only the public key ~/.ssh/id_rsa.pub.

Checking Out the Source Code

  1. Ensure that SSH has been set up properly. See Configuring Gerrit to Use SSH for details.
  2. Clone the repository with your Linux Foundation ID (<LFID>):
git clone ssh://<LFID>@gerrit.hyperledger.org:29418/fabric fabric

You have successfully checked out a copy of the source code to your local machine.

Working with Gerrit

Follow these instructions to collaborate on Hyperledger Fabric through the Gerrit review system.

Please be sure that you are subscribed to the mailing list and of course, you can reach out on chat if you need help.

Gerrit assigns the following roles to users:

  • Submitters: May submit changes for consideration, review other code changes, and make recommendations for acceptance or rejection by voting +1 or -1, respectively.
  • Maintainers: May approve or reject changes based upon feedback from reviewers voting +2 or -2, respectively.
  • Builders: (e.g. Jenkins) May use the build automation infrastructure to verify the change.

Maintainers should be familiar with the review process. However, anyone is welcome (and encouraged!) to review changes, and hence may find that document of value.

Git-review

There’s a very useful tool for working with Gerrit called git-review. This command-line tool can automate most of the ensuing sections for you. Of course, reading the information below is also highly recommended so that you understand what’s going on behind the scenes.

Getting deeper into Gerrit

A comprehensive walk-through of Gerrit is beyond the scope of this document. There are plenty of resources available on the Internet. A good summary can be found here. We have also provided a set of Best Practices that you may find helpful.

Working with a local clone of the repository

To work on something, whether a new feature or a bugfix:

  1. Open the Gerrit Projects page
  2. Select the project you wish to work on.
  3. Open a terminal window and clone the project locally using the Clone with git hook URL. Be sure that ssh is also selected, as this will make authentication much simpler:
git clone ssh://LFID@gerrit.hyperledger.org:29418/fabric && scp -p -P 29418 LFID@gerrit.hyperledger.org:hooks/commit-msg fabric/.git/hooks/

Note

If you are cloning the fabric project repository, you will want to clone it to the $GOPATH/src/github.com/hyperledger directory so that it will build, and so that you can use it with the Vagrant development environment.

  4. Create a descriptively-named branch off of your cloned repository:
cd fabric
git checkout -b issue-nnnn
  5. Commit your code. For an in-depth discussion of creating an effective commit, please read this document on submitting changes.
git commit -s -a

Then enter a precise and readable commit message and submit.

  6. Any code changes that affect documentation should be accompanied by corresponding changes (or additions) to the documentation and tests. This will ensure that if the merged change is ever reverted, all traces of the change will be reverted as well.

Submitting a Change

Currently, Gerrit is the only method to submit a change for review.

Note

Please review the guidelines for making and submitting a change.

Using git review

Note

If you prefer, you can use the git-review tool instead of the following steps, e.g.:

Add the following section to .git/config, and replace <USERNAME> with your gerrit id.

[remote "gerrit"]
    url = ssh://<USERNAME>@gerrit.hyperledger.org:29418/fabric.git
    fetch = +refs/heads/*:refs/remotes/gerrit/*

Then submit your change with git review.

$ cd <your code dir>
$ git review

When you update your patch, you can commit with git commit --amend, and then repeat the git review command.

Not using git review

See the directions for building the source code.

When a change is ready for submission, Gerrit requires that the change be pushed to a special branch. The name of this special branch contains a reference to the final branch where the code should reside, once accepted.

For the Hyperledger Fabric repository, the special branch is called refs/for/master.

To push the current local development branch to the gerrit server, open a terminal window at the root of your cloned repository:

cd <your clone dir>
git push origin HEAD:refs/for/master

If the command executes correctly, the output should look similar to this:

Counting objects: 3, done.
Writing objects: 100% (3/3), 306 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: Processing changes: new: 1, refs: 1, done
remote:
remote: New Changes:
remote:   https://gerrit.hyperledger.org/r/6 Test commit
remote:
To ssh://LFID@gerrit.hyperledger.org:29418/fabric
* [new branch]      HEAD -> refs/for/master

The gerrit server generates a link where the change can be tracked.

Reviewing Using Gerrit

  • Add: This button allows the change submitter to manually add names of people who should review a change; start typing a name and the system will auto-complete based on the list of people registered and with access to the system. They will be notified by email that you are requesting their input.
  • Abandon: This button is available to the submitter only; it allows the submitter to abandon a change and remove it from the merge queue.
  • Change-ID: This ID is generated by Gerrit (or the system). It becomes useful when the review process determines that your commit(s) have to be amended. You may submit a new version; if the same Change-Id header (and value) is present, Gerrit will remember it and present it as another version of the same change.
  • Status: Currently, the example change is in review status, as indicated by “Needs Verified” in the upper-left corner. The reviewers will each give their opinion, voting +1 if they agree to the merge and -1 if they disagree. Gerrit users with a Maintainer role can approve or refuse the merge by voting +2 or -2 respectively.

Notifications are sent to the email address in your commit message’s Signed-off-by line. Visit your Gerrit dashboard to check the progress of your requests.

The history tab in Gerrit will show you the in-line comments and the author of the review.

Viewing Pending Changes

Find all pending changes by clicking on the All --> Changes link in the upper-left corner, or open this link.

If you collaborate in multiple projects, you may wish to limit searching to the specific branch through the search bar in the upper-right side.

Add the filter project:fabric to limit the visible changes to only those from Hyperledger Fabric.

List all current changes you submitted, or list just those changes in need of your input, by clicking on My --> Changes or by opening this link.

Submitting a Change to Gerrit

Carefully review the following before submitting a change. These guidelines apply to developers that are new to open source, as well as to experienced open source developers.

Change Requirements

This section contains guidelines for submitting code changes for review. For more information on how to submit a change using Gerrit, please see Gerrit.

Changes are submitted as Git commits. Each commit must contain:

  • a short and descriptive subject line that is 72 characters or fewer, followed by a blank line.
  • a change description with your logic or reasoning for the changes, followed by a blank line
  • a Signed-off-by line, followed by a colon (Signed-off-by:)
  • a Change-Id identifier line, followed by a colon (Change-Id:). Gerrit won’t accept patches without this identifier.

A commit with the above details is considered well-formed.

All changes and topics sent to Gerrit must be well-formed. Informationally, commit messages must include:

  • what the change does,
  • why you chose that approach, and
  • how you know it works – for example, which tests you ran.

Commits must build cleanly when applied on top of each other, thus avoiding breaking bisectability. Each commit must address a single identifiable issue and must be logically self-contained.

For example: one commit fixes whitespace issues, another renames a function, and a third one changes the code’s functionality. An example commit file is illustrated below in detail:

[FAB-XXXX] A short description of your change with no period at the end

You can add more details here in several paragraphs, but please keep each line
width less than 80 characters. A bug fix should include the issue number.

Change-Id: IF7b6ac513b2eca5f2bab9728ebd8b7e504d3cebe1
Signed-off-by: Your Name <commit-sender@email.address>

Include the issue ID in the one line description of your commit message for readability. Gerrit will link issue IDs automatically to the corresponding entry in Jira.

Each commit must also contain the following line at the bottom of the commit message:

Signed-off-by: Your Name <your@email.address>

The name in the Signed-off-by line and your email must match the change authorship information. Make sure your .git/config is set up correctly. Always submit the full set of changes via Gerrit.

When a change is included in the set to enable other changes, but it will not be part of the final set, please let the reviewers know this.

Check that your change request is validated by the CI process

To ensure stability of the code and limit possible regressions, we use a Continuous Integration (CI) process based on Jenkins which triggers a build on several platforms and runs tests against every change request being submitted. It is your responsibility to check that your CR passes these tests. No CR will ever be merged if it fails the tests and you shouldn’t expect anybody to pay attention to your CRs until they pass the CI tests.

To check on the status of the CI process, simply look at your CR on Gerrit, following the URL that was given to you as the result of the push in the previous step. The History section at the bottom of the page will display a set of actions taken by “Hyperledger Jobbuilder” corresponding to the CI process being executed.

Upon completion, “Hyperledger Jobbuilder” will add to the CR a +1 vote if successful and a -1 vote otherwise.

In case of failure, explore the logs linked from the CR History. If you spot a problem with your CR amend your commit and push it to update it. The CI process will kick off again.

If you see nothing wrong with your CR it might be that the CI process simply failed for some reason unrelated to your change. In that case you may want to restart the CI process by posting a reply to your CR with the simple content “reverify”. Check the CI management page for additional information and options on this.

Reviewing a Change

  1. Click on a link for incoming or outgoing review.
  2. The details of the change and its current status are loaded:
  • Status: Displays the current status of the change. In the example below, the status reads: Needs Verified.
  • Reply: Click on this button after reviewing to add a final review message and a score, -1, 0 or +1.
  • Patch Sets: If multiple revisions of a patch exist, this button enables navigation among revisions to see the changes. By default, the most recent revision is presented.
  • Download: This button brings up another window with multiple options to download or checkout the current changeset. The button on the right copies the line to your clipboard. You can easily paste it into your git interface to work with the patch as you prefer.

Underneath the commit information, the files that have been changed by this patch are displayed.

  3. Click on a filename to review it. Select the code base to differentiate against. The default is Base and it will generally be what is needed.
  4. The review page presents the changes made to the file. At the top of the review, the presentation shows some general navigation options. Navigate through the patch set using the arrows on the top right corner. It is possible to go to the previous or next file in the set or to return to the main change screen. Click on the yellow sticky pad to add comments to the whole file.

The focus of the page is on the comparison window. The changes made are presented in green on the right versus the base version on the left. Double click to highlight the text within the actual change to provide feedback on a specific section of the code. Press c once the code is highlighted to add comments to that section.

  5. After adding the comment, it is saved as a Draft.
  6. Once you have reviewed all files and provided feedback, click the green up arrow at the top right to return to the main change page. Click the Reply button, write some final comments, and submit your score for the patch set. Click Post to submit the review of each reviewed file, as well as your final comment and score. Gerrit sends an email to the change-submitter and all listed reviewers. Finally, it logs the review for future reference. All individual comments are saved as Draft until the Post button is clicked.

Testing

Unit tests

See Building Hyperledger Fabric for the unit testing commands.

Viewing the unit test coverage report

To view the coverage for a package, or for all sub-packages, run the unit tests with the -cover flag:

go test ./... -cover

To see which lines in a package are not covered, generate an HTML report annotated with coverage from the source code:

go test -coverprofile=coverage.out
go tool cover -html=coverage.out -o coverage.html

System tests

[WIP] ...coming soon

This topic is intended to contain recommended test scenarios, as well as current performance data for various configurations.

Coding guidelines

Coding Golang

We code in Go™ and strictly follow the best practices; we will not accept any deviations. You must run the following tools against your Go code and fix all errors and warnings:

  • golint
  • go vet
  • goimports

Generating gRPC code

If you modify any .proto files, run the following command to generate/update the respective .pb.go files.

cd $GOPATH/src/github.com/hyperledger/fabric
make protos

Adding or updating Go packages

Hyperledger Fabric uses Govendor for package management. This means that all required packages reside in the $GOPATH/src/github.com/hyperledger/fabric/vendor folder. Go will use packages in this folder instead of the GOPATH when the go install or go build commands are executed. To manage the packages in the vendor folder, we use Govendor, which is installed in the Vagrant environment. The following commands can be used for package management:

# Add external packages.
govendor add +external

# Add a specific package.
govendor add github.com/kardianos/osext

# Update vendor packages.
govendor update +vendor

# Revert back to normal GOPATH packages.
govendor remove +vendor

# List package.
govendor list

Install prerequisites

Before we begin, if you haven’t already done so, you may wish to check that you have all the prerequisites installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.

Getting a Linux Foundation account

In order to participate in the development of the Hyperledger Fabric project, you will need a Linux Foundation account. You will need to use your LF ID to access all the Hyperledger community development tools, including Gerrit, Jira and the Wiki (for editing only).

Getting help

If you are looking for something to work on, or need some expert assistance in debugging a problem or working out a fix to an issue, our community is always eager to help. We hang out on Chat, IRC (#hyperledger on freenode.net) and the mailing lists. Most of us don’t bite :grin: and will be glad to help. The only silly question is the one you don’t ask. Questions are in fact a great way to help improve the project as they highlight where our documentation could be clearer.

Reporting bugs

If you are a user and you have found a bug, please submit an issue using JIRA. Before you create a new JIRA issue, please try to search the existing items to be sure no one else has previously reported it. If it has been previously reported, then you might add a comment that you also are interested in seeing the defect fixed.

Note

If the defect is security-related, please follow the Hyperledger security bug reporting process: https://wiki.hyperledger.org/security/bug-handling-process

If it has not been previously reported, create a new JIRA. Please try to provide sufficient information for someone else to reproduce the issue. One of the project’s maintainers should respond to your issue within 24 hours. If not, please bump the issue with a comment and request that it be reviewed. You can also post to the relevant Hyperledger Fabric channel in Hyperledger Rocket Chat. For example, a doc bug should be broadcast to #fabric-documentation, a database bug to #fabric-ledger, and so on...

Submitting your fix

If you just submitted a JIRA for a bug you’ve discovered, and would like to provide a fix, we would welcome that gladly! Please assign the JIRA issue to yourself, then you can submit a change request (CR).

Note

If you need help with submitting your first CR, we have created a brief tutorial for you.

Fixing issues and working stories

Review the issues list and find something that interests you. You could also check the “help-wanted” list. It is wise to start with something relatively straightforward and achievable, and that is not already assigned to anyone. If no one is assigned, assign the issue to yourself. Please be considerate and rescind the assignment if you cannot finish in a reasonable time, or add a comment saying that you are still actively working on the issue if you need a little more time.

Reviewing submitted Change Requests (CRs)

Another way to contribute and learn about Hyperledger Fabric is to help the maintainers with the review of open CRs. Indeed, maintainers have the difficult role of reviewing every CR that is submitted and evaluating whether it should be merged or not. You can review the code and/or documentation changes, test the changes, and tell the submitters and maintainers what you think. Once your review and/or test is complete, just reply to the CR with your findings by adding comments and/or voting. A comment saying something like “I tried it on system X and it works” or possibly “I got an error on system X: xxx” will help the maintainers in their evaluation. As a result, maintainers will be able to process CRs faster and everybody will gain from it.

Just browse through the open CRs on Gerrit to get started.

Making Feature/Enhancement Proposals

Review JIRA to be sure that there isn’t already an open (or recently closed) proposal for the same function. If there isn’t, we recommend that you open a JIRA Epic, Story or Improvement, whichever seems to best fit the circumstance, and link or inline a “one pager” of the proposal that states what the feature would do and, if possible, how it might be implemented. It would also help to make a case for why the feature should be added, such as identifying specific use case(s) for which the feature is needed and the benefit should the feature be implemented. Once the JIRA issue is created, and the “one pager” is either attached, inlined in the description field, or linked as a publicly accessible document in the description, send an introductory email to the hyperledger-fabric@lists.hyperledger.org mailing list linking the JIRA issue and soliciting feedback.

Discussion of the proposed feature should be conducted in the JIRA issue itself, so that we have a consistent pattern within our community as to where to find design discussion.

Getting the support of three or more of the Hyperledger Fabric maintainers for the new feature will greatly enhance the probability that the feature’s related CRs will be merged.

Setting up development environment

Next, try building the project in your local development environment to ensure that everything is set up correctly.

What makes a good change request?

  • One change at a time. Not five, not three, not ten. One and only one. Why? Because it limits the blast area of the change. If we have a regression, it is much easier to identify the culprit commit than if we have some composite change that impacts more of the code.
  • Include a link to the JIRA story for the change. Why? Because a) we want to track our velocity to better judge what we think we can deliver and when and b) because we can justify the change more effectively. In many cases, there should be some discussion around a proposed change and we want to link back to that from the change itself.
  • Include unit and integration tests (or changes to existing tests) with every change. This does not mean just happy path testing, either. It also means negative testing of any defensive code, to verify that it correctly catches input errors. When you write code, you are responsible for testing it and providing the tests that demonstrate that your change does what it claims. Why? Because without this we have no clue whether our current code base actually works.
  • Unit tests should have NO external dependencies. You should be able to run unit tests in place with go test or equivalent for the language. Any test that requires some external dependency (e.g. needs to be scripted to run another component) needs appropriate mocking. Anything else is not unit testing, it is integration testing by definition. Why? Because many open source developers do Test Driven Development. They place a watch on the directory that invokes the tests automagically as the code is changed. This is far more efficient than having to run a whole build between code changes. See this definition of unit testing for a good set of criteria to keep in mind for writing effective unit tests.
  • Minimize the lines of code per CR. Why? Maintainers have day jobs, too. If you send a 1,000 or 2,000 LOC change, how long do you think it takes to review all of that code? Keep your changes to < 200-300 LOC, if possible. If you have a larger change, decompose it into multiple independent changes. If you are adding a bunch of new functions to fulfill the requirements of a new capability, add them separately with their tests, and then write the code that uses them to deliver the capability. Of course, there are always exceptions. If you add a small change and then add 300 LOC of tests, you will be forgiven ;-). The same goes for a change with broad impact or a bunch of generated code (protobufs, etc.). Again, there can be exceptions.

Note

Large change requests, e.g. those with more than 300 LOC, are more likely than not going to receive a -2, and you’ll be asked to refactor the change to conform with this guidance.

  • Do not stack change requests (e.g. submit a CR from the same local branch as your previous CR) unless they are related. This will minimize merge conflicts and allow changes to be merged more quickly. If you stack requests your subsequent requests may be held up because of review comments in the preceding requests.
  • Write a meaningful commit message. Include a meaningful 50 (or less) character title, followed by a blank line, followed by a more comprehensive description of the change. Each change MUST include the JIRA identifier corresponding to the change (e.g. [FAB-1234]). This can be in the title but should also be in the body of the commit message. See the complete requirements for an acceptable change request.

Note

Gerrit will automatically create a hyperlink to the JIRA item, e.g.:

[FAB-1234] fix foobar() panic

Fix [FAB-1234] added a check to ensure that when foobar(foo string)
is called, that there is a non-empty string argument.

Finally, be responsive. Don’t let a change request fester with review comments such that it gets to a point that it requires a rebase. It only further delays getting it merged and adds more work for you - to remediate the merge conflicts.

Communication

We use RocketChat for communication and Google Hangouts™ for screen sharing between developers. Our development planning and prioritization is done in JIRA, and we take longer running discussions/decisions to the mailing list.

Maintainers

The project’s maintainers are responsible for reviewing and merging all patches submitted for review, and they guide the overall technical direction of the project within the guidelines established by the Hyperledger Technical Steering Committee (TSC).

Becoming a maintainer

This project is managed under an open governance model as described in our charter. Projects or sub-projects will be led by a set of maintainers. New sub-projects can designate an initial set of maintainers that will be approved by the top-level project’s existing maintainers when the project is first approved. The project’s maintainers will, from time to time, consider adding or removing a maintainer. An existing maintainer can submit a change set to the MAINTAINERS.rst file. A nominated Contributor may become a Maintainer by a majority approval of the proposal by the existing Maintainers. Once approved, the change set is then merged and the individual is added to (or alternatively, removed from) the maintainers group. Maintainers may be removed by explicit resignation, for prolonged inactivity (3 or more months), for some infraction of the code of conduct, or by consistently demonstrating poor judgement. A maintainer removed for inactivity should be restored following a sustained resumption of contributions and reviews (a month or more) demonstrating a renewed commitment to the project.

Glossary - 词汇表

Terminology is important, so that all Hyperledger Fabric users and developers agree on what we mean by each specific term (what chaincode is, for example). The documentation will reference the glossary as needed, but feel free to read the entire thing in one sitting if you like; it’s pretty enlightening!

Anchor Peer - 锚节点

A peer node on a channel that all other peers can discover and communicate with. Each Member on a channel has an anchor peer (or multiple anchor peers to prevent single point of failure), allowing for peers belonging to different Members to discover all existing peers on a channel.

Block - 区块

An ordered set of transactions that is cryptographically linked to the preceding block(s) on a channel.

Chain - 链

The ledger’s chain is a transaction log structured as hash-linked blocks of transactions. Peers receive blocks of transactions from the ordering service, mark the block’s transactions as valid or invalid based on endorsement policies and concurrency violations, and append the block to the hash chain on the peer’s file system.

Chaincode - 链码

Chaincode is software, running on a ledger, to encode assets and the transaction instructions (business logic) for modifying the assets.

Channel - 通道

A channel is a private blockchain overlay which allows for data isolation and confidentiality. A channel-specific ledger is shared across the peers in the channel, and transacting parties must be properly authenticated to a channel in order to interact with it. Channels are defined by a Configuration-Block.

Commitment - 提交

Each Peer on a channel validates ordered blocks of transactions and then commits (writes/appends) the blocks to its replica of the channel Ledger. Peers also mark each transaction in each block as valid or invalid.

Concurrency Control Version Check - 并发控制版本检查(CCVC)

Concurrency Control Version Check is a method of keeping state in sync across peers on a channel. Peers execute transactions in parallel, and before commitment to the ledger, peers check that the data read at execution time has not changed. If the data read for the transaction has changed between execution time and commitment time, then a Concurrency Control Version Check violation has occurred, and the transaction is marked as invalid on the ledger and values are not updated in the state database.

Configuration Block - 配置区块

Contains the configuration data defining members and policies for a system chain (ordering service) or channel. Any configuration modifications to a channel or overall network (e.g. a member leaving or joining) will result in a new configuration block being appended to the appropriate chain. This block will contain the contents of the genesis block, plus the delta.

Consensus - 共识

A broader term overarching the entire transactional flow, which serves to generate an agreement on the order and to confirm the correctness of the set of transactions constituting a block.

Current State - 当前状态

The current state of the ledger represents the latest values for all keys ever included in its chain transaction log. Peers commit the latest values to ledger current state for each valid transaction included in a processed block. Since current state represents all latest key values known to the channel, it is sometimes referred to as World State. Chaincode executes transaction proposals against current state data.

Dynamic Membership - 动态成员

Hyperledger Fabric supports the addition/removal of members, peers, and ordering service nodes, without compromising the operationality of the overall network. Dynamic membership is critical when business relationships adjust and entities need to be added/removed for various reasons.

Endorsement - 背书

Refers to the process where specific peer nodes execute a chaincode transaction and return a proposal response to the client application. The proposal response includes the chaincode execution response message, results (read set and write set), and events, as well as a signature to serve as proof of the peer’s chaincode execution. Chaincode applications have corresponding endorsement policies, in which the endorsing peers are specified.

Endorsement policy - 背书策略

Defines the peer nodes on a channel that must execute transactions attached to a specific chaincode application, and the required combination of responses (endorsements). A policy could require that a transaction be endorsed by a minimum number of endorsing peers, a minimum percentage of endorsing peers, or by all endorsing peers that are assigned to a specific chaincode application. Policies can be curated based on the application and the desired level of resilience against misbehavior (deliberate or not) by the endorsing peers. A transaction that is submitted must satisfy the endorsement policy before being marked as valid by committing peers. A distinct endorsement policy for install and instantiate transactions is also required.

Hyperledger Fabric CA

Hyperledger Fabric CA is the default Certificate Authority component, which issues PKI-based certificates to network member organizations and their users. The CA issues one root certificate (rootCert) to each member and one enrollment certificate (ECert) to each authorized user.

Genesis Block

The configuration block that initializes a blockchain network or channel, and also serves as the first block on a chain.
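
A genesis block is typically generated from a configtx.yaml profile with the configtxgen tool. A sketch, assuming the TwoOrgsOrdererGenesis profile shipped with the Fabric samples:

configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block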

Gossip Protocol

The gossip data dissemination protocol performs three functions: 1) manages peer discovery and channel membership; 2) disseminates ledger data across all peers on the channel; 3) syncs ledger state across all peers on the channel. Refer to the Gossip topic for more details.

Initialize

A method to initialize a chaincode application.

Install

The process of placing a chaincode on a peer’s file system. (Translator's note: that is, the ChaincodeDeploymentSpec is stored in the chaincodeInstallPath-chaincodeName.chainVersion file.)

Instantiate

The process of starting and initializing a chaincode application on a specific channel. After instantiation, peers that have the chaincode installed can accept chaincode invocations. (Translator's note: the chaincode data is saved into state via the lccc, after which the chaincode is deployed and its Init method is executed.)
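
A hedged peer CLI sketch of the install and instantiate steps together; the chaincode name, version, path, channel, orderer address, and endorsement policy are all sample values:

# Install: place the chaincode onto the peer's file system.
peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/chaincode_example02/go/

# Instantiate: start and initialize the chaincode on a channel.
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 \
    -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"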

Invoke

Used to call chaincode functions. A client application invokes chaincode by sending a transaction proposal to a peer. The peer will execute the chaincode and return an endorsed proposal response to the client application. The client application will gather enough proposal responses to satisfy an endorsement policy, and will then submit the transaction results for ordering, validation, and commit. The client application may choose not to submit the transaction results. For example, if the invoke only queried the ledger, the client application typically would not submit the read-only transaction, unless there is a desire to log the read on the ledger for audit purposes. The invoke includes a channel identifier, the chaincode function to invoke, and an array of arguments.
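
For example, invoking the sample chaincode instantiated above as mycc on channel mychannel from the peer CLI (all names are sample values):

peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc \
    -c '{"Args":["invoke","a","b","10"]}'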

Leading Peer

Each Member can own multiple peers on each channel that it subscribes to. One of these peers serves as the leading peer for the channel, communicating with the network ordering service on behalf of the member. The ordering service “delivers” blocks to the leading peer(s) on a channel, which then distribute them to other peers within the same member cluster.
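
Leader selection is governed by the peer's gossip settings in core.yaml, shown below as the equivalent environment-variable overrides. This is a sketch of the common dynamic-election setup, not a complete configuration:

CORE_PEER_GOSSIP_USELEADERELECTION=true   # peers in the organization elect a leader dynamically
CORE_PEER_GOSSIP_ORGLEADER=false          # do not statically pin this peer as the leader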

Ledger

A ledger is a channel’s chain and current state data which is maintained by each peer on the channel.

Member

A legally separate entity that owns a unique root certificate for the network. Network components such as peer nodes and application clients will be linked to a member.

Membership Service Provider

The Membership Service Provider (MSP) refers to an abstract component of the system that provides credentials to clients and peers for them to participate in a Hyperledger Fabric network. Clients use these credentials to authenticate their transactions, and peers use these credentials to authenticate transaction processing results (endorsements). While strongly connected to the transaction processing components of the system, this interface aims to have the membership services components defined in such a way that alternate implementations can be smoothly plugged in without modifying the core transaction processing components of the system.

Membership Services

Membership Services authenticates, authorizes, and manages identities on a permissioned blockchain network. The membership services code that runs in peers and orderers both authenticates and authorizes blockchain operations. It is a PKI-based implementation of the Membership Services Provider (MSP) abstraction.

Ordering Service

A defined collective of nodes that orders transactions into a block. The ordering service exists independently of the peer processes and orders transactions on a first-come-first-served basis for all channels on the network. The ordering service is designed to support pluggable implementations beyond the out-of-the-box SOLO and Kafka varieties. The ordering service is a common binding for the overall network; it contains the cryptographic identity material tied to each Member.
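
The implementation is selected in the Orderer section of configtx.yaml. A minimal sketch; the address is a sample value:

Orderer:
    OrdererType: solo            # or: kafka
    Addresses:
        - orderer.example.com:7050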

Peer

A network entity that maintains a ledger and runs chaincode containers in order to perform read/write operations to the ledger. Peers are owned and maintained by members.

Policy

There are policies for endorsement, validation, chaincode management and network/channel management.

Proposal

A request for endorsement that is aimed at specific peers on a channel. Each proposal is either an instantiate or an invoke (read/write) request.

Query

A query is a chaincode invocation which reads the ledger current state but does not write to the ledger. The chaincode function may query certain keys on the ledger, or may query for a set of keys on the ledger. Since queries do not change ledger state, the client application will typically not submit these read-only transactions for ordering, validation, and commit. Although not typical, the client application can choose to submit the read-only transaction for ordering, validation, and commit, for example, if the client wants auditable proof on the ledger chain that it had knowledge of specific ledger state at a certain point in time.
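
For example, reading the value of key "a" from the sample chaincode with the peer CLI (names are sample values); note the absence of an orderer flag, since nothing is submitted for ordering:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'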

Software Development Kit (SDK)

The Hyperledger Fabric client SDK provides a structured environment of libraries for developers to write and test chaincode applications. The SDK is fully configurable and extensible through a standard interface. Components, including cryptographic algorithms for signatures, logging frameworks and state stores, are easily swapped in and out of the SDK. The SDK provides APIs for transaction processing, membership services, node traversal and event handling. The SDK comes in multiple flavors: Node.js, Java, and Python.

State Database

Current state data is stored in a state database for efficient reads and queries from chaincode. Supported databases include LevelDB and CouchDB.
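
The state database is selected per peer in the ledger section of core.yaml, shown here as the equivalent environment-variable overrides with a sample CouchDB address:

CORE_LEDGER_STATE_STATEDATABASE=CouchDB
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984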

System Chain

Contains a configuration block defining the network at a system level. The system chain lives within the ordering service, and similar to a channel, has an initial configuration containing information such as: MSP information, policies, and configuration details. Any change to the overall network (e.g. a new org joining or a new ordering node being added) will result in a new configuration block being added to the system chain.

The system chain can be thought of as the common binding for a channel or group of channels. For instance, a collection of financial institutions may form a consortium (represented through the system chain), and then proceed to create channels relative to their aligned and varying business agendas.

Transaction

Invoke or instantiate results that are submitted for ordering, validation, and commit. Invokes are requests to read/write data from the ledger. Instantiate is a request to start and initialize a chaincode on a channel. Application clients gather invoke or instantiate responses from endorsing peers and package the results and endorsements into a transaction that is submitted for ordering, validation, and commit.

Release Notes

v1.1.0 - March 15, 2018

The v1.1 release includes all of the features delivered in v1.1.0-preview and v1.1.0-alpha.

Additionally, there are feature improvements, bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Updated to Go version 1.9.2. Updated baseimage version to 0.4.6.

Known Vulnerabilities

none

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.
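
One common way to vendor the shim is with the govendor tool; this is a sketch under that assumption, and any Go vendoring tool works equally well:

# From the chaincode source directory, inside $GOPATH/src:
go get -u github.com/kardianos/govendor
govendor init                  # create the vendor/ directory
govendor add +external         # copy external dependencies, including the shim, from $GOPATH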

Change Log

v1.1.0-rc1 - March 1, 2018

The v1.1 release candidate 1 (rc1) includes all of the features delivered in v1.1.0-preview and v1.1.0-alpha.

Additionally, there are feature improvements, bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

none

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.1.0-alpha - January 25, 2018

This is a feature-complete alpha release of the upcoming 1.1 release. The 1.1 release includes the following new major features:

  • FAB-6911 - Event service for blocks
  • FAB-5481 - Event service for block transaction events
  • FAB-5300 - Certificate Revocation List from CA
  • FAB-3067 - Peer management of CouchDB indexes
  • FAB-6715 - Mutual TLS between all components
  • FAB-5556 - Rolling Upgrade via configured capabilities
  • FAB-2331 - Node.js Chaincode support
  • FAB-5363 - Node.js SDK Connection Profile
  • FAB-830 - Encryption library for chaincode
  • FAB-5346 - Attribute-based Access Control
  • FAB-6089 - Chaincode APIs for creator identity
  • FAB-6421 - Performance improvements

Additionally, there are feature improvements, bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

none

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.1.0-preview - November 1, 2017

This is a preview release of the upcoming 1.1 release. We are not yet feature complete for 1.1, but we wanted to publish the following functionality to gain early community feedback:

  • FAB-2331 - Node.js Chaincode
  • FAB-5363 - Node.js SDK Connection Profile
  • FAB-830 - Encryption library for chaincode
  • FAB-5346 - Attribute-based Access Control
  • FAB-6089 - Chaincode APIs to retrieve creator cert info
  • FAB-6421 - Performance improvements

Additionally, there are the usual bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

none

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.0.4 - October 31, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

none

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to https://jira.hyperledger.org/browse/FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.0.3 - October 3, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

none

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to https://jira.hyperledger.org/browse/FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.0.2 - August 31, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

https://jira.hyperledger.org/browse/FAB-5753
https://jira.hyperledger.org/browse/FAB-5899

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to https://jira.hyperledger.org/browse/FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.0.1 - August 5, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

https://jira.hyperledger.org/browse/FAB-5329
https://jira.hyperledger.org/browse/FAB-5330
https://jira.hyperledger.org/browse/FAB-5353
https://jira.hyperledger.org/browse/FAB-5529
https://jira.hyperledger.org/browse/FAB-5606
https://jira.hyperledger.org/browse/FAB-5627

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to https://jira.hyperledger.org/browse/FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.0.0 - July 11, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (removal of unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

https://jira.hyperledger.org/browse/FAB-5207

Known Issues & Workarounds

The fabric-ccenv image which is used to build chaincode, currently includes the github.com/hyperledger/fabric/core/chaincode/shim (“shim”) package. This is convenient, as it provides the ability to package chaincode without the need to include the “shim”. However, this may cause issues in future releases (and/or when trying to use packages which are included by the “shim”).

In order to avoid any issues, users are advised to manually vendor the “shim” package with their chaincode prior to using the peer CLI for packaging and/or for installing chaincode.

Please refer to https://jira.hyperledger.org/browse/FAB-5177 for more details, and kindly be aware that given the above, we may end up changing the fabric-ccenv in the future.

Change Log

v1.0.0-rc1 - June 23, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Known Vulnerabilities

none

Resolved Vulnerabilities

https://jira.hyperledger.org/browse/FAB-4856
https://jira.hyperledger.org/browse/FAB-4848
https://jira.hyperledger.org/browse/FAB-4751
https://jira.hyperledger.org/browse/FAB-4626
https://jira.hyperledger.org/browse/FAB-4567
https://jira.hyperledger.org/browse/FAB-3715

Known Issues & Workarounds

none

Change Log

v1.0.0-beta - June 8, 2017

Bug fixes, documentation and test coverage improvements, UX improvements based on user feedback and changes to address a variety of static scan findings (unused code, static security scanning, spelling, linting and more).

Upgraded to the latest version of gRPC-go (a precursor to 1.4.0) and implemented the keep-alive feature for improved resiliency.

Added a new tool, configtxlator, to enable users to translate the contents of a channel configuration transaction into a human-readable form.
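
In this release configtxlator runs as a REST service. A usage sketch, where config_block.pb is a previously fetched configuration block and 7059 is the tool's default port:

configtxlator start &

curl -X POST --data-binary @config_block.pb \
    http://127.0.0.1:7059/protolator/decode/common.Block > config_block.json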

Known Vulnerabilities

none

Resolved Vulnerabilities

none

Known Issues & Workarounds

BCCSP content in configtx.yaml has been removed. This change will cause a panic when running the configtxgen tool with a configtx.yaml file that includes the removed BCCSP content.

Java Chaincode support has been disabled until post 1.0.0 as it is not yet fully mature. It may be re-enabled for experimentation by cloning the hyperledger/fabric repository, reversing this commit and building your own fork.

Change Log

v1.0.0-alpha2

The second alpha release of the v1.0.0 Hyperledger Fabric. The code is now feature complete. From now until the v1.0.0 release, the community is focused on documentation improvements, testing, hardening, bug fixing and tooling. We will be releasing successive release candidates periodically as the release firms up.

Change Log

v1.0.0-alpha - March 16, 2017

The first alpha release of the v1.0.0 Hyperledger Fabric. The code is being made available to developers to begin exploring the v1.0 architecture.

Change Log

v0.6-preview - September 16, 2016

A developer preview release of the Hyperledger Fabric intended to exercise the release logistics and stabilize a set of capabilities for developers to try out. This will be the last release under the original architecture. All subsequent releases will deliver on the v1.0 architecture.

Change Log

v0.5-developer-preview - June 17, 2016

A developer preview release of the Hyperledger Fabric intended to exercise the release logistics and stabilize a set of capabilities for developers to try out.

Key features:

  • Permissioned blockchain with immediate finality
  • Chaincode (aka smart contract) execution environments
    • Docker container (user chaincode)
    • In-process with peer (system chaincode)
  • Pluggable consensus with PBFT, NOOPS (development mode), SIEVE (prototype)
  • Event framework supports pre-defined and custom events
  • Client SDK (Node.js), basic REST APIs and CLIs

Known Key Bugs and work in progress

  • 1895 - Client SDK interfaces may crash if wrong parameter specified
  • 1901 - Slow response after a few hours of stress testing
  • 1911 - Missing peer event listener on the client SDK
  • 889 - The attributes in the TCert are not encrypted. This work is still ongoing

Still Have Questions?

We try to maintain a comprehensive set of documentation for various audiences. However, we realize that often there are questions that remain unanswered. For any technical questions relating to Hyperledger Fabric not answered here, please use StackOverflow. Another approach to getting your questions answered is to send an email to the mailing list (hyperledger-fabric@lists.hyperledger.org), or to ask your questions on RocketChat (an alternative to Slack) on the #fabric or #fabric-questions channel.

Note

When asking about a problem you are facing, please tell us about the environment in which you are experiencing it, including the OS, the version of Docker you are using, etc.

Status

Hyperledger Fabric is in the Active state. For more information on the history of this project, see our wiki page. Information on what the Active state entails can be found in the Hyperledger Project Lifecycle document.