MatrixMap: Programming abstraction and implementation of matrix computation for big data analytics

  • Corresponding author: Jiannong Cao
Abstract
  • The computation core of many big data applications can be expressed as general matrix computations, including linear algebra operations and irregular matrix operations. However, existing parallel programming systems such as Spark lack a programming abstraction and an efficient implementation for general matrix computations. In this paper, we present MatrixMap, a unified and efficient data-parallel programming framework for general matrix computations. MatrixMap provides a powerful yet simple abstraction, consisting of a distributed in-memory data structure called the bulk key matrix and a programming interface defined by matrix patterns. Users can easily load data into bulk key matrices and program algorithms as parallel matrix patterns. MatrixMap outperforms current state-of-the-art systems by employing three key techniques: matrix patterns with lambda functions for irregular and linear algebra matrix operations, an asynchronous computation pipeline with context-aware data shuffling strategies for specific matrix patterns, and an in-memory data structure that reuses data across iterations. Moreover, it automatically handles the parallelization and distributed execution of programs on a large cluster. Experimental results show that MatrixMap is 12 times faster than Spark.

    Mathematics Subject Classification: Primary: 68N15, 68Nxx; Secondary: 68Txx.


  • Figure 1.  MatrixMap Framework

    Figure 2.  Matrix Plus Pattern

    Figure 3.  Matrix Multiply Pattern

    Figure 4.  Matrix Join Pattern

    Figure 5.  System Architecture

    Figure 6.  Data flowchart of MatrixMap framework

    Figure 7.  Asynchronous Computing Process

    Figure 8.  CSR Format

    Figure 9.  Key-CSR Format
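    Figures 8 and 9 depict the CSR and Key-CSR storage layouts. As an illustrative sketch of the standard CSR (Compressed Sparse Row) layout only (not MatrixMap's actual Key-CSR implementation; the helper name is ours), a sparse matrix is stored as three flat arrays:

    ```cpp
    #include <vector>

    // Standard CSR layout: non-zero values and their column indices are
    // stored row by row; row_ptr[i]..row_ptr[i+1] delimits row i.
    struct CSR {
        std::vector<float> values;   // non-zero values
        std::vector<int>   col_idx;  // column index of each value
        std::vector<int>   row_ptr;  // size = number of rows + 1
    };

    // Build CSR from a dense matrix (illustrative helper, not from the paper).
    CSR dense_to_csr(const std::vector<std::vector<float>>& dense) {
        CSR m;
        m.row_ptr.push_back(0);
        for (const auto& row : dense) {
            for (int j = 0; j < (int)row.size(); ++j) {
                if (row[j] != 0.0f) {
                    m.values.push_back(row[j]);
                    m.col_idx.push_back(j);
                }
            }
            m.row_ptr.push_back((int)m.values.size());
        }
        return m;
    }
    ```

    CSR stores only non-zeros, which is why sparse matrix-vector products over it scale with the number of edges rather than the full matrix size.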

    Figure 10.  Breadth-First Search in Matrix Operations

    Figure 11.  Runtime comparison between MatrixMap and other programming models in non-iterative algorithms

    Figure 12.  Runtime comparison between MatrixMap and other programming models in iterative algorithms

    Figure 13.  Scalability in PageRank Run Time

    Table Listing 1.  Matrix Interface

    class BKM {
    // Matrix supporting methods
    BKM(string file_name);
    void Load(string file_name);
    // Matrix patterns
    void Map(string, string, Context);
    void Reduce(string, Iterable<int>, Context);
    float Multiply(float, float);
    float Plus(float, float);
    bool Join(float, float);
    };

    Table Listing 2.  WordCount Code

    BKM m("wordcount.txt");
    m.Map([](string key, string word, Context c) {
      c.Insert(word, 1);
    }).Sort().Reduce([](string key, Iterable<int> i, Context c) {
      int sum = 0;
      for (int e : i) {
        sum += e;
      }
      c.Insert(key, sum);
    });
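    To make the semantics of the Map/Sort/Reduce chain concrete, here is a single-machine C++ equivalent (illustrative only, not MatrixMap code): Map emits (word, 1) pairs, Sort groups them by key, and Reduce sums each group.

    ```cpp
    #include <map>
    #include <sstream>
    #include <string>

    // Single-machine equivalent of Listing 2's word count. std::map keeps
    // keys sorted (the Sort step); counts[word] += 1 is the Reduce step
    // folding the emitted 1s into a per-word sum.
    std::map<std::string, int> word_count(const std::string& text) {
        std::map<std::string, int> counts;
        std::istringstream in(text);
        std::string word;
        while (in >> word) {
            counts[word] += 1;
        }
        return counts;
    }
    ```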

    Table Listing 3.  Inner Join

    BKM matrix1("matrix1.data");
    BKM matrix2("matrix2.data");
    matrix1.Join(matrix2,
      [](float key1, float key2) {
        return key1 == key2;
      });

    Table Listing 4.  Logistic Regression

    BKM data("points.data");
    BKM weights, label, error;
    BKM temp;
    int iterations = 100;
    for (int i = 0; i < iterations; ++i) {
      temp = data.Multiply(weights);
      float h = sigmoid(temp);
      error = label.Plus(h);
      temp = data.Multiply(error);
      temp = temp.Multiply(alpha);
      weights = temp.Plus(weights);
    }
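    As a single-machine sketch of what Listing 4's loop body computes: the standard batch gradient update is w ← w + α·Xᵀ(label − sigmoid(Xw)); presumably the listing's label.Plus(h) carries the subtraction via a user lambda. The helper names below are ours, not MatrixMap's, and MatrixMap would distribute the two Multiply calls across the cluster.

    ```cpp
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Standard logistic function, as used in Listing 4.
    double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // One batch gradient step: w <- w + alpha * X^T * (label - sigmoid(X * w)).
    std::vector<double> gradient_step(const std::vector<std::vector<double>>& X,
                                      const std::vector<double>& label,
                                      std::vector<double> w, double alpha) {
        std::vector<double> err(X.size());
        for (std::size_t i = 0; i < X.size(); ++i) {
            double h = 0.0;                         // h = X[i] . w
            for (std::size_t j = 0; j < w.size(); ++j) h += X[i][j] * w[j];
            err[i] = label[i] - sigmoid(h);         // residual
        }
        for (std::size_t j = 0; j < w.size(); ++j) {
            double g = 0.0;                         // g = column j of X^T * err
            for (std::size_t i = 0; i < X.size(); ++i) g += X[i][j] * err[i];
            w[j] += alpha * g;
        }
        return w;
    }
    ```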

    Table Listing 5.  K-Means

    BKM centroids;
    int iterations = 100;
    for (int i = 0; i < iterations; ++i) {
      point.Map([](string key, vector<double> point, Context c) {
        BKM temp = point.Multiply(centroids);
        int index = min_index(temp);
        c.Insert(index, point);
      }).Reduce([](string key, Iterable<double> i, Context c) {
        centroids = c.Insert(key, average(point)).Dump();
      });
    }
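    The Map step in Listing 5 assigns each point to its nearest centroid. A plain C++ sketch of that assignment step, using squared Euclidean distance (illustrative; the listing itself picks the centroid via Multiply and min_index):

    ```cpp
    #include <cstddef>
    #include <vector>

    // Assignment step of k-means: index of the centroid closest to `point`
    // under squared Euclidean distance. Helper name is ours, not MatrixMap's.
    int nearest_centroid(const std::vector<double>& point,
                         const std::vector<std::vector<double>>& centroids) {
        int best = 0;
        double best_dist = -1.0;
        for (int c = 0; c < (int)centroids.size(); ++c) {
            double d = 0.0;
            for (std::size_t j = 0; j < point.size(); ++j) {
                double diff = point[j] - centroids[c][j];
                d += diff * diff;
            }
            if (best_dist < 0.0 || d < best_dist) {
                best_dist = d;
                best = c;
            }
        }
        return best;
    }
    ```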

    Table Listing 6.  Alternating Least Squares

    BKM m("r.data");
    BKM u, r, error;
    int iterations = 100;
    for (int i = 0; i < iterations; ++i) {
      BKM temp = m.Multiply(u);
      error = r.Plus(temp);
      temp = m.Multiply(error);
      temp = temp.Multiply(alpha);
      u = temp.Plus(u);
      temp = u.Multiply(m);
      error = r.Plus(temp);
      temp = u.Multiply(error);
      temp = temp.Multiply(alpha);
      m = temp.Plus(m);
    }

    Table Listing 7.  Breadth-first Search

    BKM graph("graph.data");
    BKM trace;
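    Listing 7 is truncated in the source, but Figure 10 shows the idea: one BFS level expansion is a matrix-vector product over a Boolean semiring (OR as plus, AND as times). A single-machine sketch, assuming an adjacency matrix where adj[i][j] marks an edge j → i (helper names are ours):

    ```cpp
    #include <cstddef>
    #include <vector>

    // One BFS level as a Boolean matrix-vector product: next[i] is true if
    // some frontier vertex j has an edge j -> i and i is not yet visited.
    std::vector<bool> bfs_step(const std::vector<std::vector<bool>>& adj,
                               const std::vector<bool>& frontier,
                               std::vector<bool>& visited) {
        std::vector<bool> next(frontier.size(), false);
        for (std::size_t i = 0; i < adj.size(); ++i) {
            for (std::size_t j = 0; j < frontier.size(); ++j) {
                if (adj[i][j] && frontier[j] && !visited[i]) {
                    next[i] = true;             // OR over incoming edges
                }
            }
            if (next[i]) visited[i] = true;
        }
        return next;
    }
    ```

    Iterating this step until the frontier is empty visits every vertex reachable from the sources.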

    Table Listing 8.  Graph Merge

    BKM A("a.data");
    BKM B("b.data");
    BKM C = A.Plus(B,
      [](float a, float b) {
        if (a != 0) return a;
        else if (b != 0) return b;
        else return 0;
      });
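    The Plus pattern in Listing 8 is an element-wise combine parameterized by a user lambda. A single-machine sketch over flat arrays (illustrative; the helper name is ours, not MatrixMap's API):

    ```cpp
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Element-wise Plus with a user lambda: combine two same-shaped
    // matrices (flattened here) cell by cell with f.
    std::vector<float> elementwise_plus(const std::vector<float>& a,
                                        const std::vector<float>& b,
                                        const std::function<float(float, float)>& f) {
        std::vector<float> c(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) c[i] = f(a[i], b[i]);
        return c;
    }
    ```

    With the merge lambda of Listing 8, non-zero cells of A win over cells of B, which merges two graphs stored as adjacency matrices.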

    Table Listing 9.  All Pair Shortest Path

    BKM W("graph.data");
    int n = W.GetRows();
    for (int i = 1; i < n - 1; i = 2 * i) {
      W = W.Multiply(W,
        [](float x, float y) {
          return min(x + y, x);
        });
    }
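    Listing 9 squares the weight matrix under a min-plus product, so O(log n) multiplications suffice for all-pairs shortest paths. A single-machine sketch of one min-plus product (illustrative; INF and the function name are ours):

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    const float INF = 1e30f;  // stand-in for "no edge"

    // Min-plus matrix product: C[i][j] = min over k of (A[i][k] + B[k][j]).
    // Squaring the weight matrix with this product doubles the maximum
    // path length considered, which is Listing 9's repeated-squaring loop.
    std::vector<std::vector<float>>
    min_plus(const std::vector<std::vector<float>>& A,
             const std::vector<std::vector<float>>& B) {
        std::size_t n = A.size();
        std::vector<std::vector<float>> C(n, std::vector<float>(n, INF));
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t k = 0; k < n; ++k)
                for (std::size_t j = 0; j < n; ++j)
                    C[i][j] = std::min(C[i][j], A[i][k] + B[k][j]);
        return C;
    }
    ```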

    Table Listing 10.  PageRank

    BKM M("web.data");
    BKM r_new, r_old;
    int iterations = 100;
    for (int i = 0; i < iterations; ++i) {
      r_new = M.Multiply(r_old);
      r_old = r_new;
    }
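    The loop body of Listing 10 is one power-iteration step, r_new = M · r_old, where M is the link matrix. A single-machine sketch of that step (illustrative; damping is omitted, as in the listing, and the function name is ours):

    ```cpp
    #include <cstddef>
    #include <vector>

    // One PageRank power-iteration step: r_new = M * r_old. With a
    // column-stochastic link matrix M, repeating this converges to the
    // principal eigenvector, i.e. the PageRank vector.
    std::vector<double> pagerank_step(const std::vector<std::vector<double>>& M,
                                      const std::vector<double>& r_old) {
        std::vector<double> r_new(M.size(), 0.0);
        for (std::size_t i = 0; i < M.size(); ++i)
            for (std::size_t j = 0; j < r_old.size(); ++j)
                r_new[i] += M[i][j] * r_old[j];
        return r_new;
    }
    ```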