0. On the AIBOX, uname -v shows:
#3 SMP Tue Apr 23 10:23:57 CST 2024
1. Following the guide "Quick Llama3 Deployment" (https://wiki.t-firefly.com/AIBOX-1684X/quick-llama3.html),
deployment completed and the demo runs normally. The model used is llama3-8b_int4_1dev_256.bmodel.
2. Because our prompt engineering needs a longer context, we switched to a newly compiled model, llama3-8b_int8_1dev_4096.bmodel,
and it fails with the following error:
Device [ 0 ] loading ....
[BMRT][bmcpu_setup:436] INFO:cpu_lib 'libcpuop.so' is loaded.
bmcpu init: skip cpu_user_defined
open usercpu.so, init user_cpu_init
[BMRT][BMProfile:60] INFO:Profile For arch=3
[BMRT][BMProfileDeviceBase:190] INFO:gdma=0, tiu=0, mcu=0
Model[/home/linaro/llama3/bmodels/llama3-8b_int4_1dev_256.bmodel] loading ....
[BMRT][load_bmodel:1696] INFO:Loading bmodel from [/home/linaro/llama3/bmodels/llama3-8b_int4_1dev_256.bmodel]. Thanks for your patience...
[BMRT][load_bmodel:1583] INFO:Bmodel loaded, version 2.2+v1.8.beta.0-89-g32b7f39b8-20240612
[BMRT][load_bmodel:1585] INFO:pre net num: 0, load net num: 69
[BMRT][load_tpu_module:1674] INFO:loading firmare in bmodel
[BMRT][preload_funcs:1876] INFO: core_id=0, multi_fullnet_func_id=22
[BMRT][preload_funcs:1879] INFO: core_id=0, dynamic_fullnet_func_id=23
[bmlib_memory][error] bm_alloc_gmem failed, dev_id = 0, size = 0xd05a000
[BM_CHECK][error] BM_CHECK_RET fail /workspace/libsophon/bmlib/src/bmlib_memory.cpp: sg_malloc_device_byte_heap_mask: 729
[BMRT][Register:1776] FATAL:coeff alloc failed, size[0xd05a000]
python3: /home/linaro/llama3/Llama3/python_demo/chat.cpp:128: void Llama3::init(const std::vector<int>&, std::string): Assertion `true == ret' failed.
(Note: the log shows the int4 filename because the demo program hard-codes the model path, so llama3-8b_int8_1dev_4096.bmodel was renamed to llama3-8b_int4_1dev_256.bmodel; the model actually being loaded is llama3-8b_int8_1dev_4096.bmodel.)
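A note on the numbers: the bm_alloc_gmem failure is for a single coefficient block, not the whole model, but a back-of-envelope comparison of weight sizes shows why the int8/4096 model is far heavier than the int4/256 one that worked. A minimal sketch in plain Python; the 8B parameter count is the only assumption, and scales, zero-points, and KV-cache buffers are ignored:

```python
# Decode the allocation size from the failing log line:
#   [bmlib_memory][error] bm_alloc_gmem failed, dev_id = 0, size = 0xd05a000
failed_alloc = 0xd05a000
print(f"failing block: {failed_alloc / 2**20:.2f} MiB")  # about 208 MiB

# Rough weight-size estimate for an 8B-parameter model:
# int8 stores ~1 byte per parameter, int4 ~0.5 bytes.
params = 8e9
print(f"int8 weights ~ {params * 1.0 / 2**30:.1f} GiB")
print(f"int4 weights ~ {params * 0.5 / 2**30:.1f} GiB")
```

So even before the KV cache for a 4096-token context is counted, the int8 weights alone are roughly twice the int4 footprint, which is consistent with the TPU global-memory (gmem) heap running out during coefficient loading.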
3. Does the AIBOX-1684X support llama3-8b_int8_1dev_4096.bmodel? If not, does it support an int4 model with a 4096 context length?
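While waiting for an answer, one way to narrow this down is to check how much TPU memory the device actually exposes before loading. A sketch assuming the bm-smi tool shipped with libsophon is on PATH (output columns vary by SDK version; if the tool is missing, the script just says so):

```shell
#!/bin/sh
# Hedged check: bm-smi (from libsophon) reports per-device TPU memory usage.
# Compare the free gmem/heap figure against the bmodel's size before loading.
if command -v bm-smi >/dev/null 2>&1; then
    bm-smi          # inspect the memory columns for device 0
else
    echo "bm-smi not found; install the libsophon tools to inspect TPU memory"
fi
```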