PARALLELISM IN HASKELL (Kathleen Fisher)

CS242: Parallelism in Haskell
Kathleen Fisher
Reading: A Tutorial on Parallel and Concurrent Programming in Haskell (skip Section 5 on STM)
Thanks to Simon Peyton Jones, Satnam Singh, and Don Stewart for these slides.

The Grand Challenge
Making effective use of multi-core hardware is the challenge for programming languages now.
Hardware is getting increasingly complicated:
- Nested memory hierarchies
- Hybrid processors: GPU + CPU, Cell, FPGA...
- Massive compute power sitting mostly idle
We need new programming models to program new commodity machines effectively.

Candidate Models in Haskell
▪ Explicit threads
  ▪ Non-deterministic by design
  ▪ Monadic: forkIO and STM
▪ Semi-implicit parallelism
  ▪ Deterministic
  ▪ Pure: par and pseq
▪ Data parallelism
  ▪ Deterministic
  ▪ Pure: parallel arrays
  ▪ Shared memory initially; distributed memory eventually; possibly even GPUs...

main :: IO ()
main = do { ch <- newChan
          ; forkIO (ioManager ch)
          ; forkIO (worker 1 ch)
          ... etc ... }
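As a concrete illustration of the explicit-threads model, here is a minimal, self-contained sketch; the bodies of ioManager and worker are hypothetical stand-ins for whatever the elided slide code does.

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever)

-- Hypothetical manager: print every message that arrives on the channel.
ioManager :: Chan String -> IO ()
ioManager ch = forever (readChan ch >>= putStrLn)

-- Hypothetical worker: send one message to the manager.
worker :: Int -> Chan String -> IO ()
worker n ch = writeChan ch ("hello from worker " ++ show n)

main :: IO ()
main = do
  ch <- newChan
  _ <- forkIO (ioManager ch)
  _ <- forkIO (worker 1 ch)
  threadDelay 100000   -- crude: give the forked threads time before main exits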

Parallelism vs. Concurrency
A parallel program exploits real parallel computing resources to run faster while computing the same answer.
- Expectation of genuinely simultaneous execution
- Deterministic
A concurrent program models independent agents that can communicate and synchronize.
- Meaningful on a machine with one processor
- Non-deterministic

Haskell Execution Model

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

[Figure: the heap graph of "thunks" built while evaluating fib 10. Each thunk holds a pointer to the implementation, a storage slot for the result, and the values of its free variables.]
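A small sketch (my own illustration, not from the slides) of the thunk behaviour described above: the call builds a thunk, demanding it fills in the storage slot, and later uses reuse the stored result instead of recomputing it.

fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

main :: IO ()
main = do
  let x = fib 30              -- builds a thunk for "fib 30"; nothing is computed yet
  x `seq` putStrLn "forced"   -- demand forces the thunk; its slot now holds the value
  print x                     -- reuses the stored result; no recomputation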

Functional Programming to the Rescue?
No side effects makes parallelism easy, right?
- It is always safe to speculate on pure code.
- Execute each sub-expression in its own thread?
Alas, the 80s dream does not work.
- Far too many parallel tasks, many of which are too small to be worth the overhead of forking them.
- Difficult/impossible for the compiler to guess which are worth forking.
Idea: Give the user control over which expressions might run in parallel.

The par combinator

par :: a -> b -> b
x `par` y

Value (i.e., thunk) bound to x is sparked for speculative evaluation.
Runtime may instantiate a spark on a thread running in parallel with the parent thread.
Operationally, x `par` y = y.
Typically, x is used inside y:

blurRows `par` (mix blurCols blurRows)

All parallelism built up from the par combinator.
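A minimal runnable sketch of building parallelism from par, using pseq (mentioned earlier) to order evaluation; both live in Control.Parallel from the parallel package. Fibonacci and the structure of the recursion are illustrative choices, not taken from this slide.

import Control.Parallel (par, pseq)

-- Spark the left recursive call, evaluate the right one on the current
-- thread, then combine; pseq ensures y is evaluated before the sum.
pfib :: Int -> Integer
pfib n
  | n < 2     = fromIntegral n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 30)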

Concurrency Hierarchy
- Sparks: very, very lightweight
- Haskell threads: lightweight
- OS threads: heavyweight
- CPUs

The meaning of par
par does not guarantee a new Haskell thread.
It hints that it would be good to evaluate the first argument in parallel.
The runtime decides whether to convert a spark:
- Depending on current workload.
This allows par to be very cheap:
- Programmers can use it almost anywhere.
- Safely over-approximate program parallelism.
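For the runtime to convert sparks at all, the program has to be built with the threaded RTS and run on multiple capabilities; something along these lines (file name illustrative):

ghc -O2 -threaded -rtsopts ParFib.hs
./ParFib +RTS -N4 -s

The -s summary printed by the RTS includes a SPARKS line reporting how many sparks were converted, fizzled, or garbage-collected.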

Example: One Processor

x `par` (y + x)

Timeline on a single processor:
- x is sparked
- y is evaluated
- x is evaluated (by the main thread)
- the spark for x fizzles
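A runnable version of the slide's expression (the definitions of x and y are illustrative; the slide leaves them abstract). Run on a single capability, the main thread evaluates y and then x itself, so the spark for x is never converted and fizzles, exactly as the timeline above shows.

import Control.Parallel (par)

main :: IO ()
main = print (x `par` (y + x))
  where
    -- Illustrative work for x and y.
    x = sum     [1 .. 1000000] :: Integer
    y = product [1 .. 20]      :: Integer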