Distributed RPC

The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls.

DRPC is not so much a feature of Storm as it is a pattern expressed from Storm's primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it's so useful that it's bundled with Storm.

High level overview

Distributed RPC is coordinated by a "DRPC server" (Storm comes packaged with an implementation of this). The DRPC server coordinates receiving an RPC request, sending the request to the Storm topology, receiving the results from the Storm topology, and sending the results back to the waiting client. From a client's perspective, a distributed RPC call looks just like a regular RPC call. For example, here's how a client would compute the results for the "reach" function with the argument "http://twitter.com":

DRPCClient client = new DRPCClient("drpc-host", 3772);
String result = client.execute("reach", "http://twitter.com");

The distributed RPC workflow looks like this:

(Figure: the distributed RPC workflow between the client, the DRPC server, and the Storm topology.)

A client sends the DRPC server the name of the function to execute and the arguments to that function. The topology implementing that function uses a DRPCSpout to receive a function invocation stream from the DRPC server. Each function invocation is tagged with a unique id by the DRPC server. The topology then computes the result, and at the end of the topology a bolt called ReturnResults connects to the DRPC server and gives it the result for the function invocation id. The DRPC server then uses the id to match the result with the client that is waiting, unblocks that client, and sends it the result.

LinearDRPCTopologyBuilder

Storm comes with a topology builder called LinearDRPCTopologyBuilder that automates almost all the steps involved in doing DRPC. These include:

  1. Setting up the spout
  2. Returning the results to the DRPC server
  3. Providing functionality to bolts for doing finite aggregations over groups of tuples

Let's look at a simple example. Here's the implementation of a DRPC topology that returns its input argument with a "!" appended:

public static class ExclaimBolt extends BaseBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String input = tuple.getString(1);
        collector.emit(new Values(tuple.getValue(0), input + "!"));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "result"));
    }
}

public static void main(String[] args) throws Exception {
    LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("exclamation");
    builder.addBolt(new ExclaimBolt(), 3);
    // ...
}

As you can see, there's very little to it. When creating the LinearDRPCTopologyBuilder, you tell it the name of the DRPC function for the topology. A single DRPC server can coordinate many functions, and the function name distinguishes the functions from one another. The first bolt you declare will take in as input 2-tuples, where the first field is the request id and the second field is the arguments for that request. LinearDRPCTopologyBuilder expects the last bolt to emit an output stream containing 2-tuples of the form [id, result]. Finally, all intermediate tuples must contain the request id as the first field.

In this example, ExclaimBolt simply appends a "!" to the second field of the tuple. LinearDRPCTopologyBuilder handles the rest of the coordination of connecting to the DRPC server and sending results back.

Local mode DRPC

DRPC can be run in local mode. Here's how to run the above example in local mode:

LocalDRPC drpc = new LocalDRPC();
LocalCluster cluster = new LocalCluster();

Config conf = new Config();
cluster.submitTopology("drpc-demo", conf, builder.createLocalTopology(drpc));

System.out.println("Results for 'hello': " + drpc.execute("exclamation", "hello"));

cluster.shutdown();
drpc.shutdown();

First you create a LocalDRPC object. This object simulates a DRPC server in process, just like how LocalCluster simulates a Storm cluster in process. Then you create the LocalCluster to run the topology in local mode. LinearDRPCTopologyBuilder has separate methods for creating local topologies and remote topologies. In local mode the LocalDRPC object does not bind to any ports so the topology needs to know about the object to communicate with it. This is why createLocalTopology takes in the LocalDRPC object as input.

After launching the topology, you can do DRPC invocations using the execute method on LocalDRPC.

Remote mode DRPC

Using DRPC on an actual cluster is also straightforward. There are three steps:

  1. Launch DRPC server(s)
  2. Configure the locations of the DRPC servers
  3. Submit DRPC topologies to Storm cluster

Launching a DRPC server can be done with the storm script and is just like launching Nimbus or the UI:

bin/storm drpc

Next, you need to configure your Storm cluster to know the locations of the DRPC server(s). This is how DRPCSpout knows where to read function invocations from. This can be done through the storm.yaml file or through the topology configuration. Configuring this through storm.yaml looks something like this:

drpc.servers:
  - "drpc1.foo.com"
  - "drpc2.foo.com"

Finally, you launch DRPC topologies using StormSubmitter just like you launch any other topology. To run the above example in remote mode, you do something like this:

StormSubmitter.submitTopology("exclamation-drpc", conf, builder.createRemoteTopology());

createRemoteTopology is used to create topologies suitable for Storm clusters.
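
Once the topology is running on the cluster, any client can invoke the function through one of the configured DRPC servers, exactly like the client snippet at the top of this page. A small sketch, assuming the default DRPC port of 3772 and that the "exclamation" topology above has been submitted:

// uses backtype.storm.utils.DRPCClient
DRPCClient client = new DRPCClient("drpc1.foo.com", 3772);
// prints "hello!" once the "exclamation" topology is running
System.out.println(client.execute("exclamation", "hello"));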

A more complex example

The exclamation DRPC example was a toy example for illustrating the concepts of DRPC. Let's look at a more complex example which really needs the parallelism a Storm cluster provides for computing the DRPC function. The example we'll look at is computing the reach of a URL on Twitter.

The reach of a URL is the number of unique people exposed to a URL on Twitter. To compute reach, you need to:

  1. Get all the people who tweeted the URL
  2. Get all the followers of all those people
  3. Unique the set of followers
  4. Count the unique set of followers

A single reach computation can involve thousands of database calls and tens of millions of follower records during the computation. It's a really, really intense computation. As you're about to see, implementing this function on top of Storm is dead simple. On a single machine, reach can take minutes to compute; on a Storm cluster, you can compute reach for even the hardest URLs in a couple seconds.

A sample reach topology is defined in the storm-starter project. Here's how you define the reach topology:

LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("reach");
builder.addBolt(new GetTweeters(), 3);
builder.addBolt(new GetFollowers(), 12)
        .shuffleGrouping();
builder.addBolt(new PartialUniquer(), 6)
        .fieldsGrouping(new Fields("id", "follower"));
builder.addBolt(new CountAggregator(), 2)
        .fieldsGrouping(new Fields("id"));

The topology executes as four steps:

  1. GetTweeters gets the users who tweeted the URL. It transforms an input stream of [id, url] into an output stream of [id, tweeter]. Each url tuple will map to many tweeter tuples (a sketch of such a bolt follows this list).
  2. GetFollowers gets the followers for the tweeters. It transforms an input stream of [id, tweeter] into an output stream of [id, follower]. Across all the tasks, there may of course be duplication of follower tuples when someone follows multiple people who tweeted the same URL.
  3. PartialUniquer groups the followers stream by the follower id. This has the effect of the same follower going to the same task, so each task of PartialUniquer will receive mutually independent sets of followers. Once PartialUniquer receives all the follower tuples directed at it for the request id, it emits the unique count of its subset of followers.
  4. Finally, CountAggregator receives the partial counts from each of the PartialUniquer tasks and sums them up to complete the reach computation.
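
To make step 1 concrete, here is a minimal sketch of what a bolt like GetTweeters could look like. The TWEETERS_DB map is a hypothetical in-memory stand-in for the real data source (the storm-starter version uses static lookup maps in a similar way), so treat it purely as an illustration of the tuple contract:

public static class GetTweeters extends BaseBasicBolt {
    // hypothetical in-memory stand-in for the "who tweeted this URL" data source
    static Map<String, List<String>> TWEETERS_DB = new HashMap<String, List<String>>();

    public void execute(Tuple tuple, BasicOutputCollector collector) {
        Object id = tuple.getValue(0);   // the request id always stays in the first field
        String url = tuple.getString(1);
        List<String> tweeters = TWEETERS_DB.get(url);
        if (tweeters != null) {
            for (String tweeter : tweeters) {
                collector.emit(new Values(id, tweeter));
            }
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "tweeter"));
    }
}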

Let's take a look at the PartialUniquer bolt:

public class PartialUniquer extends BaseBatchBolt {
    BatchOutputCollector _collector;
    Object _id;
    Set<String> _followers = new HashSet<String>();
    
    @Override
    public void prepare(Map conf, TopologyContext context, BatchOutputCollector collector, Object id) {
        _collector = collector;
        _id = id;
    }

    @Override
    public void execute(Tuple tuple) {
        _followers.add(tuple.getString(1));
    }
    
    @Override
    public void finishBatch() {
        _collector.emit(new Values(_id, _followers.size()));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "partial-count"));
    }
}

PartialUniquer implements IBatchBolt by extending BaseBatchBolt. A batch bolt provides a first-class API for processing a batch of tuples as a concrete unit. A new instance of the batch bolt is created for each request id, and Storm takes care of cleaning up the instances when appropriate.

When PartialUniquer receives a follower tuple in the execute method, it adds it to the set for the request id in an internal HashSet.

Batch bolts provide the finishBatch method which is called after all the tuples for this batch targeted at this task have been processed. In the callback, PartialUniquer emits a single tuple containing the unique count for its subset of follower ids.

Under the hood, CoordinatedBolt is used to detect when a given bolt has received all of the tuples for any given request id. CoordinatedBolt makes use of direct streams to manage this coordination.

The rest of the topology should be self-explanatory. As you can see, every single step of the reach computation is done in parallel, and defining the DRPC topology was extremely simple.

Non-linear DRPC topologies

LinearDRPCTopologyBuilder only handles "linear" DRPC topologies, where the computation is expressed as a sequence of steps (like reach). It's not hard to imagine functions that would require a more complicated topology with branching and merging of the bolts. For now, to do this you'll need to drop down into using CoordinatedBolt directly. Be sure to talk about your use case for non-linear DRPC topologies on the mailing list to inform the construction of more general abstractions for DRPC topologies.

How LinearDRPCTopologyBuilder works

  • DRPCSpout emits [args, return-info], where return-info contains the host and port of the DRPC server as well as the id generated by the DRPC server
  • It constructs a topology comprising:
    • DRPCSpout
    • PrepareRequest (generates a request id and creates a stream for the return info and a stream for the args)
    • CoordinatedBolt wrappers and direct groupings
    • JoinResult (joins the result with the return info)
    • ReturnResults (connects to the DRPC server and returns the result)
  • LinearDRPCTopologyBuilder is a good example of a higher level abstraction built on top of Storm's primitives (a rough sketch of wiring DRPCSpout and ReturnResults by hand follows this list)
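
To make the components above a bit more concrete, here is a rough sketch of how a trivial DRPC topology could be wired by hand using only DRPCSpout and ReturnResults. This is not what LinearDRPCTopologyBuilder itself generates (it also adds PrepareRequest, the CoordinatedBolt wrappers, and JoinResult), and it only works for bolts that need no batching or aggregation; the field positions for the args, result, and return info are assumptions based on the [args, return-info] description above:

public static class ManualExclaimBolt extends BaseBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String args = tuple.getString(0);       // the function argument emitted by DRPCSpout
        Object returnInfo = tuple.getValue(1);  // opaque return info, passed through untouched
        collector.emit(new Values(args + "!", returnInfo));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("result", "return-info"));
    }
}

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("drpc", new DRPCSpout("exclamation"));
builder.setBolt("exclaim", new ManualExclaimBolt(), 3).shuffleGrouping("drpc");
builder.setBolt("return", new ReturnResults(), 3).shuffleGrouping("exclaim");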

Advanced

  • KeyedFairBolt for weaving the processing of multiple requests at the same time
  • How to use CoordinatedBolt directly

This article comes from the Storm GitHub wiki.
