How do you run LLM inference on multiple GPUs with the Accelerate library?

冬至子 | Source: 思否AI | Author: 思否AI | 2023-12-01 10:24

Large language models (LLMs) have transformed natural language processing. As these models grow in size and complexity, the computational demands of inference increase significantly as well, and using multiple GPUs becomes essential to meet this challenge.

In this article we will run inference in parallel on multiple GPUs. It mainly covers: an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks using multiple GPUs.

We will use several RTX 3090 GPUs to scale llama2-7b inference across multiple GPUs.

Basic example

We start with a simple example that demonstrates multi-GPU "message passing" with Accelerate.

from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# each GPU creates a string
message=[ f"Hello this is GPU {accelerator.process_index}" ]

# collect the messages from all GPUs
messages=gather_object(message)

# output the messages only on the main process with accelerator.print()
accelerator.print(messages)

The output looks like this:

['Hello this is GPU 0', 
   'Hello this is GPU 1', 
   'Hello this is GPU 2', 
   'Hello this is GPU 3', 
   'Hello this is GPU 4']
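
A note on how to run these scripts (not from the original article, but standard Accelerate usage): the script is started once per GPU, typically with Accelerate's launcher, e.g. accelerate launch --num_processes 5 message_passing.py, where the file name is hypothetical. A minimal sketch for sanity-checking the process layout inside such a script:

from accelerate import Accelerator

accelerator = Accelerator()

# number of processes Accelerate was launched with (one per GPU when using
# `accelerate launch --num_processes N script.py`)
accelerator.print(f"running with {accelerator.num_processes} processes")

# every process reports its own index and device
print(f"hello from process {accelerator.process_index} on {accelerator.device}")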

Multi-GPU inference

Below is a simple, non-batched approach to inference. The code is short because the Accelerate library already does most of the work for us; we can use it directly:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    # store output of generations in dict
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference, prompt by prompt
    for prompt in prompts:
        prompt_tokenized=tokenizer(prompt, return_tensors="pt").to("cuda")
        output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

        # remove prompt from output
        output_tokenized=output_tokenized[len(prompt_tokenized["input_ids"][0]):]

        # store outputs and number of tokens in results{}
        results["outputs"].append( tokenizer.decode(output_tokenized) )
        results["num_tokens"] += len(output_tokenized)

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")

Using multiple GPUs introduces some communication overhead: in this particular setup, performance scales roughly linearly up to about 4 GPUs and then levels off. Of course, the numbers depend on many parameters, such as model size and quantization, prompt length, number of generated tokens, and sampling strategy, so they should be read as a general indication only (a short sketch after the numbers below shows where these knobs appear in the generate() call):

1 GPU: 44 tokens/sec, time: 225.5 s
2 GPUs: 88 tokens/sec, time: 112.9 s
3 GPUs: 128 tokens/sec, time: 77.6 s
4 GPUs: 137 tokens/sec, time: 72.7 s
5 GPUs: 119 tokens/sec, time: 83.8 s
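
The numbers above were produced with the default generation settings. As a hypothetical illustration (not part of the original benchmark) of the knobs mentioned above, the generated-token budget and the sampling strategy are controlled directly in the generate() call:

# fewer generated tokens and greedy decoding; both change the tokens/sec figures
output_tokenized = model.generate(
    **prompt_tokenized,
    max_new_tokens=50,   # smaller generation budget than the 100 used in the benchmark
    do_sample=False,     # greedy decoding instead of sampling
)[0]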

Batching on multiple GPUs

In real-world use we can speed things up with batched inference. This reduces the communication between GPUs and makes inference faster. We only need to add a prepare_prompts function that feeds the model a batch of prompts instead of a single prompt at a time:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

def write_pretty_json(file_path, data):
    import json
    with open(file_path, "w") as write_file:
        json.dump(data, write_file, indent=4)

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# batch, left pad (for inference), and tokenize
def prepare_prompts(prompts, tokenizer, batch_size=16):
    batches=[prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]
    batches_tok=[]
    tokenizer.padding_side="left"
    for prompt_batch in batches:
        batches_tok.append(
            tokenizer(
                prompt_batch,
                return_tensors="pt",
                padding='longest',
                truncation=False,
                pad_to_multiple_of=8,
                add_special_tokens=False).to("cuda")
            )
    tokenizer.padding_side="right"
    return batches_tok

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference in batches
    prompt_batches=prepare_prompts(prompts, tokenizer, batch_size=16)

    for prompts_tokenized in prompt_batches:
        outputs_tokenized=model.generate(**prompts_tokenized, max_new_tokens=100)

        # remove prompt from gen. tokens
        outputs_tokenized=[ tok_out[len(tok_in):]
            for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized) ]

        # count and decode gen. tokens
        num_tokens=sum([ len(t) for t in outputs_tokenized ])
        outputs=tokenizer.batch_decode(outputs_tokenized)

        # store in results{} to be gathered by accelerate
        results["outputs"].extend(outputs)
        results["num_tokens"] += num_tokens

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")

As you can see, batching speeds things up considerably:

1 GPU: 520 tokens/sec, time: 19.2 s
2 GPUs: 900 tokens/sec, time: 11.1 s
3 GPUs: 1205 tokens/sec, time: 8.2 s
4 GPUs: 1655 tokens/sec, time: 6.0 s
5 GPUs: 1658 tokens/sec, time: 6.0 s
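
One optional tweak, not in the original code: because the pad token is set to the eos token and generate() pads the shorter sequences in a batch, the decoded generations can contain trailing special tokens. Passing skip_special_tokens=True strips them:

# decode generations without eos/pad tokens
outputs=tokenizer.batch_decode(outputs_tokenized, skip_special_tokens=True)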

Summary

As of this writing, llama.cpp and ctransformers do not support multi-GPU inference. llama.cpp appears to have had a multi-GPU merge in June, but I have not seen it in an official release, so for now I am treating multi-GPU as unsupported there. If anyone can confirm that it does support multiple GPUs, please leave a comment.

Hugging Face's Accelerate package gives us a very convenient way to use multiple GPUs. Multi-GPU inference can significantly improve throughput, but the inter-GPU communication overhead grows noticeably as the number of GPUs increases.
