Distributed Cloud Computing and Distributed Parallel Computing: A Review
Abstract:
In this paper, we present a review of two of the hottest topics in this area, namely distributed parallel processing and distributed cloud computing. Various aspects are discussed, including whether these two topics have been treated together in previous work. Other aspects reviewed in this paper include the algorithms that have been simulated in both distributed parallel computing and distributed cloud computing. The goal is to process tasks over the available resources and then readjust the computation among the servers for the sake of optimization, which helps improve system performance to the desired rates. In our review, we present some articles that explain the design of applications in distributed cloud computing, while others introduce the concept of decreasing the response time in distributed parallel computing.
Published in: 2018 International Conference on Advanced Science and Engineering (ICOASE)
Date of Conference: 09-11 October 2018
Date Added to IEEE Xplore: 29 November 2018
DOI: 10.1109/ICOASE.2018.8548937
Publisher: IEEE
Conference Location: Duhok, Iraq
SECTION I. Introduction
Cloud computing is a recently growing technology, with the scheduling mechanism as a vital part, that allows for the processing of huge amounts of data [1].
One field where we increasingly see cloud computing is high energy physics (HEP). In this study, we explore the reasons why cloud technologies are integrated into HEP applications and how this integration is gradually becoming more common [2].
Cloud computing is a broad term generally referring to hosted services. It is the virtualization of physical hardware where data is organized in specified centres. Yet as a new technology with several merits, it is not without its issues; a major challenge is load balancing [3].
This article proposes an implementation of a robot SLAM architecture in order to fulfil the real-time requirement of practical robot systems, which is essential. The robot SLAM adopts two parallel processing threads to fulfil this role. Because the computational complexity is dominantly determined by the number of particles employed, two distributed threads with different particle counts are executed simultaneously [4].
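To illustrate this two-thread idea, the following minimal sketch (our own illustration under simplifying assumptions, not the authors' code from [4]) runs two toy particle-filter update workers in parallel with different particle counts and reports their results and the elapsed time.

```python
import threading
import time
import random

def pf_update(name, num_particles, results):
    """Toy particle-filter update: weight and resample `num_particles` particles."""
    particles = [random.gauss(0.0, 1.0) for _ in range(num_particles)]
    weights = [abs(p) + 1e-9 for p in particles]          # stand-in for measurement likelihoods
    total = sum(weights)
    weights = [w / total for w in weights]
    resampled = random.choices(particles, weights=weights, k=num_particles)
    results[name] = len(resampled)

results = {}
# Two parallel threads with different particle counts, as in the described PF-SLAM design.
t_small = threading.Thread(target=pf_update, args=("fast_thread", 200, results))
t_large = threading.Thread(target=pf_update, args=("keyframe_thread", 5000, results))
start = time.perf_counter()
t_small.start(); t_large.start()
t_small.join(); t_large.join()
print(results, "elapsed:", round(time.perf_counter() - start, 4), "s")
```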
In cloud computing services, virtual clusters allow virtualized resources to be allocated dynamically. User management and virtual storage are among the commonest uses of the cloud environment by IT and business companies. In cloud services, the entry point is job organization, and its arrangement is vital for the competence of the whole cloud service. Selecting appropriate resources for job execution is the mechanism for job implementation [5].
Large-scale industrial processing of big data entails monitoring and modelling issues; to deal with this, a distributed and parallel principal component analysis (PCA) approach is proposed. The large-scale process is first decomposed into distributed blocks based on a priori process knowledge, which solves the problem of high-dimensional process variables [6].
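The following minimal sketch (our own illustration, not the method code from [6], with hypothetical block names) decomposes a high-dimensional data matrix into variable blocks and fits a separate PCA model on each block, which is the basic idea behind the distributed-parallel decomposition.

```python
import numpy as np

def block_pca(X, blocks, n_components=2):
    """Fit a separate PCA (via SVD) on each block of variables and return the scores."""
    scores = {}
    for name, cols in blocks.items():
        Xb = X[:, cols]
        Xb = Xb - Xb.mean(axis=0)                 # centre each block independently
        U, S, Vt = np.linalg.svd(Xb, full_matrices=False)
        scores[name] = Xb @ Vt[:n_components].T   # project onto the leading components
    return scores

# Hypothetical plant-wide data: 1000 samples, 12 process variables split into 3 blocks
# according to (assumed) a priori process knowledge.
X = np.random.rand(1000, 12)
blocks = {"reactor": [0, 1, 2, 3], "separator": [4, 5, 6, 7], "stripper": [8, 9, 10, 11]}
scores = block_pca(X, blocks)
print({k: v.shape for k, v in scores.items()})
```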
In a cloud system, some nodes can be lightly loaded while others are heavily loaded [7], resulting in poor performance. In a cloud environment, distributing the load between the nodes is the function of load balancing, which is the spotlight of problems in cloud computing [8].
An even load distribution in the cloud system leads to better resource utilization and is much desired [7].
In order to balance the total load on the system and ensure good overall performance relative to some specific metric of system performance, a load balancing algorithm [9] [10] [11] transparently transfers workload from heavily loaded nodes to lightly loaded nodes. When performance is considered from the user's point of view, the metric is the response time of the processes; when performance is considered from the resource point of view, the metric is total system throughput [12]. It is the function of throughput to ensure fair treatment of all the users and that they make progress [9] [10] [12] [11] [8] [7], in contrast to response time [10].
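As a concrete illustration of this idea (a minimal sketch of our own, not an algorithm from the cited works), the following code moves one unit of work at a time from the most loaded node to the least loaded node until the loads are roughly even.

```python
def balance(loads, tolerance=1.0):
    """Greedy load balancing: repeatedly move one unit of work
    from the most loaded node to the least loaded node."""
    loads = dict(loads)
    moves = []
    while True:
        hot = max(loads, key=loads.get)
        cold = min(loads, key=loads.get)
        if loads[hot] - loads[cold] <= tolerance:
            break
        loads[hot] -= 1
        loads[cold] += 1
        moves.append((hot, cold))
    return loads, moves

# Hypothetical node loads (e.g. number of queued tasks per node).
initial = {"node1": 12, "node2": 2, "node3": 7, "node4": 1}
final, moves = balance(initial)
print(final)                  # roughly even loads
print(len(moves), "migrations")
```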
SECTION II. Distributed Cloud Computing
Scientific applications utilizing clouds are seemingly of rising interest among researchers; at the same time, many large corporations are contemplating switching over to hybrid clouds. Parallel processing is needed for the effective execution of jobs by complex applications. In a parallel process, the presence of communication and synchronization allows for more effective use of CPU resources. Thus, overall, maintaining the level of responsiveness of parallel jobs while achieving effective utilization of nodes is mandatory for a data centre [1].
Through cloud computing, a client can request information, shared resources, software and other services at any time, according to his specifications. It is an on-demand service, and the term is commonly seen across the internet; the whole internet can be viewed as a cloud. Moreover, utilizing the cloud decreases capital and operational costs. However, a major challenge in cloud computing is load balancing, and a distributed solution to this issue is always required. Because of the complexity of the cloud and the widespread distribution of its components, it is difficult to achieve efficient load balancing by assigning jobs to appropriate servers and clients individually, and it is neither cost effective nor practical to fulfil the required demands by maintaining one or more idle services. While jobs are assigned, some uncertainty is attached [3].
A protocol is thus proposed and designed, the purpose of which is to enhance server throughput and performance, improve resource utilization and reduce switching time. In this protocol, the jobs are scheduled in the cloud, and the drawbacks of the existing protocols are solved. In order to minimize the waiting and switching time, jobs that offer better performance to the computer are given priority. Solving the drawbacks of existing protocols by managing job scheduling has taken considerable effort, along with improving the throughput and efficiency of the server [1].
Scientific applications utilizing systems based on cloud computing allow for high throughput computing (HTC). Applications in particle physics have systems with a unified infrastructure that utilize a number of distinct IaaS clouds. There are a number of criteria that our cloud computing system is based on. The embarrassingly parallel single HEP applications need to run in a batch environment in the system. No inter-node or inter-process communications are required; moreover, the memory footprint is reduced by using multi-process jobs that share process memory [2].
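The following minimal sketch (our own illustration, not the actual HEP batch system described in [2]) hints at why multi-process jobs can reduce the memory footprint: several worker processes attach to one shared block of data instead of each holding a private copy.

```python
import numpy as np
from multiprocessing import Pool, shared_memory

SHAPE, DTYPE = (4_000_000,), np.float64   # roughly 32 MB of hypothetical event data

def partial_sum(args):
    """Each worker attaches to the same shared block instead of receiving a private copy."""
    shm_name, start, stop = args
    shm = shared_memory.SharedMemory(name=shm_name)
    data = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    result = float(data[start:stop].sum())
    shm.close()
    return result

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=int(np.prod(SHAPE)) * 8)
    data = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    data[:] = 1.0                                        # fill the shared block once
    chunks = [(shm.name, i, i + 1_000_000) for i in range(0, SHAPE[0], 1_000_000)]
    with Pool(4) as pool:                                # four processes, one shared copy of the data
        total = sum(pool.map(partial_sum, chunks))
    print(total)
    shm.close(); shm.unlink()
```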
In [13], topology island formation and fast network topology processing for the power network are realized by CIM-based parallel topological processing, which is discussed in this paper. The authors design high-throughput, high-reliability and high-availability storage for a cloud storage platform. They also introduce the design idea of the MySQL-CIM model, realize an efficient MySQL-CIM data model through Ogma development, and develop a power network topology processing (NTP) application [13].
Cloud computing is a broad term generally referring to hosted services. It is the virtualization of physical hardware where data is organized in specified centres. Yet as a new technology with several merits, it is not without its issues; a major challenge is load balancing, an important topic in cloud computing requiring much research and study. Since many systems are involved in the structure of a data centre, it is difficult to perform load balancing, especially in the case of cloud computing. The majority of research on the subject has been done in distributed environments, yet semi-distributed load balancing has been the target of very little research. Load balancing in a semi-distributed way would allow the design of new algorithms for cloud computing [3].
The merits of decentralized computing away from data centres, along with the consideration of using infrastructure obtained from multiple providers and changing cloud infrastructure, have been discussed in this paper. A new architecture for computing is demanded by these new trends and needs to be fulfilled by cloud infrastructure in the future. Self-learning systems, service spaces, data-intensive computing and people-device connections are the areas expected to be most influenced by these new architectures. Finally, in order to realize the next-generation cloud system's potential, a roadmap of the challenges that need to be considered has been proposed [14].
Creating ad hoc clouds and harnessing computing for online and mobile applications at the edge of the network, using computing models based on voluntarily provided resources, has been discussed in this paper. The idea of paying for a cloud VM even if the server executing on the VM is idle is a traditional notion, and a computing model has been presented to replace it. An emerging cloud computing model that integrates resilience and is software defined has also been mentioned. Some areas will be influenced by the newly forming computing architecture and the changing cloud infrastructure. The Internet-of-Things paradigm will be eased, further enhancing the connectivity between people and devices, and new architectures will play a vital role. The volume of data provides a challenge in data-intensive computing, and novel techniques are needed to address this. There will be rising interest in new services such as acceleration, containers and functions. Self-learning systems will be realized when research areas converge with cloud systems. Academia and industry are the leading forces in these changes, yet many challenges need to be solved in the future. The development of next-generation cloud computing with sustainable systems and efficient management, expressing applications and improving security has been discussed in this paper, along with its approach and direction [14].
Grid computing, distributed computing and parallel computing developments are part of cloud computing. Firstly, two traditional parallel programming models are introduced in this paper, along with an exposition of the concept of cloud computing. Secondly, it analyses and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce, respectively. Finally, it discusses and compares the MPI and OpenMP models and MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing [15].
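To make the comparison concrete, the following minimal sketch (our own illustration, not code from [15]) expresses the MapReduce style in plain Python: a map phase emits key-value pairs in parallel worker processes and a reduce phase aggregates them, which is the programming model that [15] contrasts with MPI and OpenMP.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_phase(line):
    """Map: emit (word, 1) pairs for one line of input."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

if __name__ == "__main__":
    corpus = ["cloud computing and parallel computing",
              "distributed cloud computing",
              "parallel and distributed processing"]
    with Pool(2) as pool:                       # map tasks run in parallel worker processes
        mapped = pool.map(map_phase, corpus)
    flattened = [pair for chunk in mapped for pair in chunk]
    print(reduce_phase(flattened))              # e.g. {'cloud': 2, 'computing': 3, ...}
```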
Great emphasis has been placed on distributed computing in this paper. The differences between distributed and parallel computing have been studied as well, along with terminology, task allocation, performance parameters, the advantages and scope of distributed computing, and parallel distributed algorithm models [16].
SECTION III. Distributed Parallel Processing
In a cloud system, some nodes can be lightly loaded while others are heavily loaded, resulting in poor performance. In a cloud environment, distributing the load between the nodes is the function of load balancing, which is the spotlight of problems in cloud computing [7].
The quality of service needs to meet the standards for job arrangement in order to satisfy the virtual users. Decreasing resource price, minimizing makespan and safeguarding fault tolerance along with quality of service are used to improve resource allotment and job arrangement. The CloudSim toolkit has been used with the existing scheduling policies to evaluate the proposed algorithms. Early experimental results have shown better user response, execution time, cost and time on different cloud workloads for the proposed framework compared to the already existing algorithms. Different virtual machine scheduling algorithms, along with their performance based on various quality metrics, have been discussed in this paper. The approach depends on CPU performance, network consumption, scheduling success rate, standard implementation time and so on.
We have shed some light on processing user requests utilizing caching in the network itself. By making use of parallel processing, a new caching strategy was introduced, and its performance was extensively evaluated in terms of reducing redundant traffic and data access delay in different caching scenarios.
The cost of implementing the aforementioned caching network has also been studied. The improved performance from decreased delay comes at the price of increased cost, and when implementing the proposed parallel processing, this trade-off needs to be carefully considered. The new strategy has shown marked effectiveness in the simulation results [17].
In order to improve the real-time performance of PF-based robot SLAM, an effective parallel implementation has been proposed in this paper. The overall SLAM algorithm can be accelerated by the discussed distributed parallel idea, because a large number of particles is used only in the scenario when a keyframe laser scan is grabbed. The results we have obtained from our experiments have verified that the temporal cost can be cut effectively using this distributed architecture [4].
P. Srinivasa Rao et al. in [18] referred to the elements needed for an effective balanced approach, namely the status information of all other nodes. When a node receives a job, it has to query the status of the other nodes to find out which node has less usage and can take the forwarded work, and when all the nodes query each other, overload happens. Having the nodes broadcast statements about their status also causes a huge load on the network. Next is the issue of the time wasted at each node to perform the status queries. Besides, the current state of the network is also a factor affecting load balancing performance, because in a complex network with multiple subnets, configuring a network node to locate all the other nodes is a fairly complex task. Thus, querying the status of nodes in the cloud affects load balancing performance [18].
Reference [19] showed that factors like response time greatly affect load balancing performance on the cloud. Another study outlined two outstanding issues of the previous algorithms: i) load balancing occurs only when the server is overloaded; ii) continuous retrieval of information about available resources leads to increased computational cost and bandwidth consumption. The authors therefore proposed an algorithm, based on the response time of requests, that assigns the required decisions to servers appropriately; this approach reduces the querying of information on available resources and reduces communication and computation on each server.
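A minimal sketch of this idea (our own, not the algorithm from [19]) is shown below: instead of polling every server's resource state, the dispatcher keeps an exponentially weighted average of each server's recent response times and sends the next request to the server with the lowest estimate.

```python
class ResponseTimeBalancer:
    """Pick the server with the lowest smoothed response time; no resource polling needed."""

    def __init__(self, servers, alpha=0.3):
        self.alpha = alpha
        self.estimate = {s: 0.0 for s in servers}   # smoothed response time per server

    def pick(self):
        return min(self.estimate, key=self.estimate.get)

    def record(self, server, response_time):
        old = self.estimate[server]
        self.estimate[server] = (1 - self.alpha) * old + self.alpha * response_time

balancer = ResponseTimeBalancer(["s1", "s2", "s3"])
# Hypothetical feedback loop: dispatch a request, then record the observed response time.
for observed in [("s1", 0.12), ("s2", 0.45), ("s3", 0.08), ("s1", 0.20)]:
    balancer.record(*observed)
print(balancer.pick())   # 's3' has the lowest smoothed response time so far
```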
According to [20], the Min-Min algorithm minimizes the time to complete the work at each network node; however, the algorithm does not consider the workload of each resource. Therefore, the authors proposed the Load Balance Improved Min-Min (LBIMM) algorithm to overcome this weakness. Failure to consider the workload of each resource leads to some resources being overloaded while others are idle; therefore, the work done on each resource is a factor affecting load balancing performance on the cloud. The simple traditional Min-Min algorithm exists, and the current scheduling algorithms in cloud computing are based upon it.
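For reference, the following is a minimal sketch of the basic Min-Min scheduling idea (our own illustration, not the LBIMM variant from [20]): among all unscheduled tasks, repeatedly pick the task whose minimum completion time over all resources is smallest and assign it to that resource.

```python
def min_min(task_lengths, resource_speeds):
    """Basic Min-Min: assign each task to the resource giving its earliest completion time,
    always scheduling the task with the smallest such minimum first."""
    ready_time = {r: 0.0 for r in resource_speeds}           # when each resource becomes free
    unscheduled = dict(task_lengths)
    schedule = []
    while unscheduled:
        best = None
        for task, length in unscheduled.items():
            for res, speed in resource_speeds.items():
                completion = ready_time[res] + length / speed
                if best is None or completion < best[0]:
                    best = (completion, task, res)
        completion, task, res = best
        ready_time[res] = completion
        schedule.append((task, res, completion))
        del unscheduled[task]
    return schedule, max(ready_time.values())                # schedule and its makespan

tasks = {"t1": 4.0, "t2": 8.0, "t3": 2.0, "t4": 6.0}          # hypothetical task lengths
resources = {"vm1": 1.0, "vm2": 2.0}                          # hypothetical processing speeds
schedule, makespan = min_min(tasks, resources)
print(schedule)
print("makespan:", makespan)
```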
Kapur in [21] proposed the LBRS (Load Balanced Resource Scheduling) algorithm, which considers the importance of resource scheduling policies and load balancing for resources in the cloud. The main goals are to maximize CPU utilization, maximize throughput, minimize response time, minimize waiting time, minimize turnaround time, minimize resource cost and obey the fairness principle. Here, the QoS parameters mentioned are throughput, response time and waiting time. We have simulated and analysed data on the impact of these parameters on load balancing in the cloud. From there, we discovered that the makespan (runtime) parameter is of great significance for the cloud data centre. So, the task for researchers is to study algorithms with effective load balancing that reduce the makespan of virtual machines.
To achieve optimal utilization of resources, the dynamic workload is distributed by load balancing across multiple resources, which prevents any single resource from being underutilized or overwhelmed; this, however, is a considerable optimization problem. A load balancing strategy based on Simulated Annealing (SA) has been proposed in this paper, and balancing the cloud infrastructure load is its primary function. A traditional Cloud Analyst simulator is modified and the effectiveness of the algorithm is measured. In comparison to existing approaches such as First Come First Serve (FCFS) and local search algorithms, i.e. Stochastic Hill Climbing (SHC) and Round Robin (RR), the proposed algorithm has shown better overall performance [22].
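The sketch below shows, under our own simplifying assumptions, how simulated annealing can search for a task-to-VM assignment with a small makespan; it is only an illustration of the general SA idea, not the algorithm evaluated in [22].

```python
import math
import random

def makespan(assignment, task_lengths, vm_speeds):
    """Completion time of the busiest VM for a given task-to-VM assignment."""
    finish = {vm: 0.0 for vm in vm_speeds}
    for task, vm in assignment.items():
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish.values())

def sa_balance(task_lengths, vm_speeds, steps=5000, t0=10.0, cooling=0.999):
    vms = list(vm_speeds)
    current = {t: random.choice(vms) for t in task_lengths}   # random initial assignment
    best, best_cost = dict(current), makespan(current, task_lengths, vm_speeds)
    temp = t0
    for _ in range(steps):
        candidate = dict(current)
        candidate[random.choice(list(task_lengths))] = random.choice(vms)  # move one task
        delta = makespan(candidate, task_lengths, vm_speeds) - makespan(current, task_lengths, vm_speeds)
        if delta < 0 or random.random() < math.exp(-delta / temp):         # accept some worse moves early on
            current = candidate
            cost = makespan(current, task_lengths, vm_speeds)
            if cost < best_cost:
                best, best_cost = dict(current), cost
        temp *= cooling
    return best, best_cost

tasks = {f"t{i}": random.uniform(1, 10) for i in range(20)}   # hypothetical workload
vms = {"vm1": 1.0, "vm2": 1.5, "vm3": 2.0}
assignment, cost = sa_balance(tasks, vms)
print("makespan:", round(cost, 2))
```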
We tried to maximize the utilization of resources, to keep working resources available for tasks yet to come, and also to concentrate on the reliability of cloud services. We propose a new scheduling algorithm called the Dabbawala cloud scheduling algorithm, based on the Mumbai Dabbawala delivery system. In our proposed system, tasks are grouped according to the cost required to complete them on a cluster and its VM resources. We find the lowest-cost cluster and its VM for each requested task and group them together for service, as in the Hadoop MapReduce model. There are two phases, called mapping the tasks and reducing the mapped tasks. The algorithm uses four Dabbawalas for each task to be serviced. Some available scheduling algorithms are compared with this algorithm, and considerable gains in time and resource utilization are achieved [23].
SECTION IV. Existing Load Balancing Techniques in Cloud
A. VectorDot
VectorDot is an innovative algorithm proposed by A. Singh et al. [18] for load balancing. It utilizes a flexible data centre with storage virtualization and integrated server technologies to handle the multidimensional resource loads that are distributed across network switches, servers and storage, and the hierarchical complexity of the data centre. VectorDot helps relieve overloads on storage nodes, switches and servers, and at the same time distinguishes nodes based on item requirements using the dot product.
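The following minimal sketch (our own reading of the dot-product idea, not the published VectorDot implementation) scores each candidate node by the dot product of its current multidimensional load vector and the item's requirement vector, and places the item on the node with the smallest score, i.e. the node whose already-loaded dimensions overlap least with what the item needs.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def place_item(item_req, node_loads):
    """Pick the node whose load vector has the smallest dot product with the
    item's requirement vector (CPU, memory, network, storage fractions)."""
    scores = {node: dot(load, item_req) for node, load in node_loads.items()}
    return min(scores, key=scores.get), scores

# Hypothetical utilisation vectors: (cpu, mem, net, storage), each in [0, 1].
nodes = {
    "server_a": (0.9, 0.4, 0.2, 0.1),
    "server_b": (0.2, 0.3, 0.6, 0.2),
    "server_c": (0.3, 0.2, 0.1, 0.8),
}
item = (0.5, 0.1, 0.1, 0.05)        # a CPU-heavy item to place
chosen, scores = place_item(item, nodes)
print(chosen, scores)               # a lightly CPU-loaded server wins over the CPU-hot server_a
```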
B. LB of VM resources scheduling strategy
A scheduling strategy that utilizes the current state of the system and historical data for load balancing of VM resources was proposed by J. Hu et al. [9]. By implementing a genetic algorithm, this strategy reduces dynamic migration and accomplishes the best load balancing. It achieves better resource utilization by dealing with the issues of high migration cost and load imbalance.
C. Task Scheduling Based on LB
A mechanism to gain high resource utilization and satisfy users' dynamic requirements based on load balancing with two-level task scheduling is discussed by Y. Fang et al. [11]. It maps tasks to virtual machines and then virtual machines to host resources, accomplishing load balancing and resulting in a cloud computing environment with better resource utilization, improved task response time and an overall gain in performance.
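A minimal sketch of such a two-level mapping (our own illustration, not the mechanism from [11]): tasks are first assigned greedily to the currently least-loaded VM, and the VMs are then packed onto the least-loaded hosts.

```python
def two_level_schedule(tasks, vms, hosts):
    """Level 1: map each task to the currently least-loaded VM.
    Level 2: map each VM (with its accumulated load) to the least-loaded host."""
    vm_load = {vm: 0.0 for vm in vms}
    task_to_vm = {}
    for task, length in sorted(tasks.items(), key=lambda kv: -kv[1]):  # big tasks first
        vm = min(vm_load, key=vm_load.get)
        vm_load[vm] += length
        task_to_vm[task] = vm

    host_load = {h: 0.0 for h in hosts}
    vm_to_host = {}
    for vm, load in sorted(vm_load.items(), key=lambda kv: -kv[1]):
        host = min(host_load, key=host_load.get)
        host_load[host] += load
        vm_to_host[vm] = host
    return task_to_vm, vm_to_host, host_load

tasks = {"t1": 5, "t2": 3, "t3": 8, "t4": 2, "t5": 6}   # hypothetical task lengths
print(two_level_schedule(tasks, ["vm1", "vm2", "vm3"], ["hostA", "hostB"]))
```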
D. Active Clustering
Optimizing job assignments by using local re-wiring to connect similar services was the self-aggregating technique for load balancing that M. Randles et al. [9] investigated. Using resources effectively in a high-resource system leads to increased throughput, bettering the performance of the system. However, performance degrades as system diversity increases.
E. Cloud Load Balancing Metrics
Various metrics considered in existing load balancing techniques in cloud computing are discussed below; a short sketch showing how some of them can be computed follows the list:
Throughput measures the number of executed tasks; a high number indicates good system performance.
When applying a load balancing algorithm, the amount of overhead involved is measured. Inter-process and inter-processor communication and task mobilization make up the overhead, and the more efficient a load balancing technique is, the less overhead is involved.
Load balancing needs a good fault tolerance technique, which is the ability of an algorithm to perform uniform load balancing despite link or arbitrary node failures.
Good-performance systems have minimized migration times, i.e. the time needed for migration of resources or jobs between individual nodes; the less, the better.
Response time is another parameter which, if minimized, leads to better system performance; it is the time needed for a particular load balancing algorithm to respond in a distributed system.
Effective resource utilization is mandatory for efficient load balancing, and optimization should be performed.
The scalability of an algorithm is determined by its ability to perform load balancing for any finite number of nodes in a system. Enhanced scalability is desired.
The effectiveness of a system is measured by its performance, yet the cost effectiveness needs to be considered and kept reasonable. An example would be keeping acceptable delays while decreasing task response times [24].
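As a simple illustration of how several of these metrics might be computed from a simulation trace (our own sketch, with hypothetical field names, not a metric definition taken from [24]):

```python
def metrics(events, window):
    """Compute throughput, mean response time and mean migration time from a
    hypothetical trace: each event has submit, start and finish times and an
    optional migration duration."""
    finished = [e for e in events if e["finish"] <= window]
    throughput = len(finished) / window                              # tasks completed per time unit
    response = sum(e["finish"] - e["submit"] for e in finished) / len(finished)
    migrations = [e["migration"] for e in events if e.get("migration")]
    mean_migration = sum(migrations) / len(migrations) if migrations else 0.0
    return {"throughput": throughput, "mean_response": response,
            "mean_migration": mean_migration}

trace = [
    {"submit": 0.0, "start": 0.1, "finish": 1.2},
    {"submit": 0.2, "start": 0.5, "finish": 2.0, "migration": 0.3},
    {"submit": 1.0, "start": 1.1, "finish": 3.5},
]
print(metrics(trace, window=4.0))
```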
SECTION V. Discussion
Distributed cloud computing is a new technology for interconnecting data and applications served from different locations. In information technology, the term 'distributed' means that something is shared among multiple users or systems that are geographically separated. As shown in Table I, there are several features that can be obtained from using distributed cloud computing, and each feature has an effect when using cloud technology.
An important feature mentioned in more than one reference is the multi-process job feature, because an important aim of cloud computing is processing multiple jobs at the same time via more than one server, even if the servers are in different locations. When we have a large amount of data to process, we can divide it into small pieces, and each part may be processed by a different server. The aim of this process is to decrease CPU usage, minimize switching time, minimize waiting time for processing data, improve server throughput and improve the performance of data communication and computation.
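As a minimal sketch of this splitting idea (our own, with hypothetical worker functions standing in for remote servers), the data is divided into fixed-size chunks and each chunk is handed to a separate worker process:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for the work one server would do on its part of the data."""
    return sum(x * x for x in chunk)

def split(data, n_parts):
    """Divide the data into n_parts roughly equal pieces."""
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))           # a large (hypothetical) workload
    chunks = split(data, 4)                 # one piece per server/worker
    with Pool(4) as pool:                   # four workers stand in for four servers
        partials = pool.map(process_chunk, chunks)
    print(sum(partials))                    # combine the partial results
```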
Another feature is designing applications for the cloud structure that help users easily use any cloud application to communicate with different users in different locations. One more feature is reducing memory usage: a long-standing problem has been the limits of local memory, and after adopting the cloud, users can use memory as they need by contacting the cloud application manager to expand it, so that users only use cloud memory and reduce their own storage.
The last important feature is improving server performance, because communication performance is important. After reviewing all the references mentioned in Table I, we decided that reference [1] is the best work on distributed cloud computing, because it covers a great number of the features discussed in detail above.
Improving the performance of digital computers and their other attributes (such as cost effectiveness, reliability and so on) by means of various forms of concurrency is the concern of parallel processing, which achieves this through various algorithmic and architectural methods. There are three types of parallel processing approaches: distributed, shared and hybrid memory systems. In this review, we focused on distributed parallel processing and determined some important features, as shown in Table II.
TABLE I. Distributed Cloud Computing Summary
An important feature mentioned in more than one reference is improving performance using the load balancing technique. By using load balancing among servers, we can distribute the processing and balance the servers for processing the jobs, improving the performance of our distributed system. Another feature is minimizing resource cost, because when we divide the load among servers, we can minimize resource costs such as CPU, memory and storage. All of the references implement this idea by proposing an algorithm for distributed parallel processing based on the response time of responding to user requests, because a system with minimum response time is better for the user when responding to requests.
After reviewing the references in this paper, we decided that reference [21] is better because it covers a great number of features, including load balancing for improving system performance and minimizing both response time and resource cost.
TABLE II. Distributed Parallel Processing Summary
SECTION VI. Conclusion
This review paper has covered many ideas in distributed cloud computing and distributed parallel computing. Subjects such as the combination of the two areas have been addressed in this review. The main goal of this paper relates to the process of distributing workloads over servers and then processing them among master and slave nodes. The articles discussed in this paper include the methodology of designing applications in distributed cloud computing and the concept of optimizing response times while executing users' images.
References
1. L. Tripathy and R.R. Patra, "SCHEDULING IN CLOUD COMPUTING", International Journal on Cloud Computing: Services and Architecture (IJCCSA), vol. 4, no. 5, October 2014.
2. R. J. Sobie, "Distributed Cloud Computing in High Energy Physics", DCC '14 Proceedings of the 2014 ACM SIGCOMM workshop on Distributed cloud computing, pp. 17-22, August 2014.
3. P. A. Pawade and V. T. Gaikwad, "Semi-Distributed Cloud Computing System with Load Balancing Algorithm", 2014.
4. Xiuzhi Li, Songmin Jia, Ke Wang and Xiaolin Yin, "DISTRIBUTED PARALLEL PROCESSING OF MOBILE ROBOT PF-SLAM", International Conference on Automatic Control and Artificial Intelligence (ACAI 2012), Xiamen, China, April 2013.
5. A. K. Indira and B. M. K. Devi, "Effective Integrated Parallel Distributed Processing Approach in Optimized Multi-cloud computing Environment", Sixth International Conference on Advanced Computing (ICoAC), pp. 17-19, Chennai, India, Dec. 2014.
6. J. Zhu, Z. Ge and Z. Song, "Distributed Parallel PCA for Modeling and Monitoring of Large-scale Plant-wide Processes with Big Data", IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 1877-1885, Aug. 2017.
7. A. Khiyati, M. Zbakh, H. El Bakkali and D. El Kettani, "Load Balancing Cloud Computing: State Of Art", 2012 National Days of Network Security and Systems, pp. 20-21, Marrakech, Morocco, April 2012.
8. M. A. Vouk, "Cloud Computing – Issues Research and Implementations", ITI 2008 - 30th International Conference on Information Technology Interfaces, pp. 23-26, Dubrovnik, Croatia, June 2008.
9. C. Lin, H. Chin and D. Deng, "Dynamic Multiservice Load Balancing in Cloud-Based Multimedia System", IEEE Systems Journal, vol. 8, pp. 225-234, March 2014.
10. Y. Deng and Rynson W.H. Lau, "On Delay Adjustment for Dynamic Load Balancing in Distributed Virtual Environments", IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 4, April 2012.
11. L.D. D. Babua and P. V. Krishna, "Honey bee behavior inspired load balancing of tasks in cloud computing environments", Applied Soft Computing, Amsterdam, The Netherlands, vol. 13, no. 5, pp. 2292-2303, May 2013.
12. D. Warneke and O. Kao, "Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud", IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 6, June 2011.
13. X. L. Xingong and X. Lv, "Distributed Cloud Storage and Parallel Topology Processing of Power Network", Third International Conference on Trustworthy Systems and Their Applications, pp. 18-22, Wuhan, China, Sept. 2016.
14. B. Varghese and R. Buyya, "Next Generation Cloud Computing: New Trends and Research Directions", Future Generation Computer Systems, September 2017.
15. Z. Peng, Q. Gong, Y. Duan and Y. Wang, "The Research of the Parallel Computing Development from the Angle of Cloud Computing", IOP Conf. Series: Journal of Physics: Conf., 2017.
16. Md. F. Ali and R. Zaman Khan, "Distributed Computing: An Overview", International Journal of Advanced Networking and Applications, vol. 07, no. 01, pp. 2630-2635, 2015.
17. Y. Sun, Z. Zhu and Z. Fan, "Distributed Caching in Wireless Cellular Networks Incorporating Parallel Processing", IEEE Internet Computing, vol. 22, no. 1, pp. 52-61, Feb. 2018.
18. P. Srinivasa Rao, V.P.C Rao and A. Govardhan, "Dynamic Load Balancing With Central Monitoring of Distributed Job Processing System", International Journal of Computer Applications, vol. 65, no. 21, March 2013.
19. A. Sharma and S. K. Peddoju, "Response Time Based Load Balancing in Cloud Computing", 2014 International Conference on Control Instrumentation Communication and Computational Technologies (ICCICCT), July 2014.
20. H. Chen, F. Wang, N. Helian and G. Akanmu, "User-Priority Guided Min-Min Scheduling Algorithm For Load Balancing in Cloud Computing", 2013 National Conference on Parallel Computing Technologies (PARCOMPTECH), Bangalore, India, Feb 2013.
21. R. Kapur, "A Workload Balanced Approach for Resource Scheduling in Cloud Computing", 2015 Eighth International Conference on Contemporary Computing (IC3), pp. 20-22, Noida, India, Aug 2015.
22. B. Mondal and A. Choudhury, "Simulated Annealing (SA) based Load Balancing Strategy for Cloud Computing", International Journal of Computer Science and Information Technologies, vol. 6, pp. 3307-3312, 2015.
23. S.K.S. Kumar and P. Balasubramanie, "Cloud Scheduling Using Mumbai Dabbawala", International Journal of Computer Science and Mobile Computing, vol. 4, no. 10, October 2015.
24. N. J. Kansal and I. Chana, "Existing Load balancing techniques in cloud computing: A SYSTEMATIC REVIEW", Journal of Information Systems and Communication, vol. 3, no. 1, pp. 87-91, 2012.