juskirat2000
Posts: 13
Joined: Tue Jan 05, 2021 6:06 am

Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Wed Jun 23, 2021 8:33 am

Hi! I am trying to run ultralytics/yolov5 on my Raspberry Pi 4. I have custom-trained it on my dataset successfully and am trying to measure its performance by running test.py, but on running the command

Code: Select all

python3 test.py --data data.yaml
I get the error below.

Error is:

Code: Select all

test: data=data.yaml, weights=yolov5s.pt, batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/test, name=exp, exist_ok=False, half=False
YOLOv5 🚀 2021-6-23 torch 1.7.0a0+57bffc3 CPU

Fusing layers... 
Model Summary: 224 layers, 7266973 parameters, 0 gradients
val: Scanning 'valid/labels.cache' images and labels... 29 found,
val: Scanning 'valid/labels.cache' images and labels... 29 found,
val: Scanning 'valid/labels.cache' images and labels... 29 found,
               Class     Images     Labels          P          R Segmentation fault (core dumped)

Please let me know what could be the cause of this error. FYI, I am meeting all the requirements mentioned in requirements.txt.

My Raspberry Pi has 4 GB of RAM and 64 GB of disk space.

topguy
Posts: 7189
Joined: Tue Oct 09, 2012 11:46 am
Location: Trondheim, Norway

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Wed Jun 23, 2021 8:41 am

To check whether it's a memory-limit problem, you should run "htop" ( sudo apt install htop ) in another terminal window and observe the memory usage.
You don't say how quickly it crashes after starting, so it's hard for me to know whether it's observable.
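If htop isn't convenient, here is a quick non-interactive sketch (assumes a Linux system with /proc, as on the Pi):

```shell
# One-shot views of memory while test.py runs in the other terminal
free -h                           # RAM and swap usage in human-readable units
grep MemAvailable /proc/meminfo   # what the kernel estimates is still allocatable
```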

Or maybe you have found a bug in Python 3, or more likely in torch.

juskirat2000
Posts: 13
Joined: Tue Jan 05, 2021 6:06 am

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Wed Jun 23, 2021 9:29 am

Yes, I have RAM available.
I am not sure what the exact issue is, but I have been running into this with every dataset.

What issue do you think it could be with Python?
I am using Python 3.8.5 with PyTorch 1.7.0 and torchvision 0.8.1.

topguy
Posts: 7189
Joined: Tue Oct 09, 2012 11:46 am
Location: Trondheim, Norway

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Wed Jun 23, 2021 12:23 pm

Segmentation fault (core dumped)
This only tells us that some machine code tried to access memory that was not allocated to that process.

Exactly which process/thread it was can only be deduced by looking at the "core" file: https://embeddedbits.org/linux-core-dump-analysis/
That is, unless there is a stack trace in some logfile, or you did not include it in your snippet.
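A minimal sketch of that core-file analysis (the paths are illustrative; it assumes gdb is installed and that core files land in the working directory rather than being swallowed by apport):

```shell
ulimit -c unlimited                    # allow core files in this shell
# python3 test.py --data data.yaml     # re-run until it dumps core again
# gdb python3 core -ex bt -ex quit     # print the C-level backtrace from the core
ulimit -c                              # should now report "unlimited"
```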

juskirat2000
Posts: 13
Joined: Tue Jan 05, 2021 6:06 am

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Wed Jun 23, 2021 5:50 pm

The issue is in these files, but I don't know how to rectify it or what to take into consideration.

It is the yolov3 part of ultralytics, not yolov5 - ultralytics/yolov3:

Code: Select all

ubuntu@ubuntu:~/Desktop/yolov3$ python3 test.py --data data.yaml
Namespace(augment=False, batch_size=32, conf_thres=0.001, data='data.yaml', device='', exist_ok=False, img_size=640, iou_thres=0.6, name='exp', project='runs/test', save_conf=False, save_hybrid=False, save_json=False, save_txt=False, single_cls=False, task='val', verbose=False, weights='yolov3.pt')
YOLOv3 🚀 v9.5.0-13-g1be3170 torch 1.7.0a0+57bffc3 CPU
Fusing layers... 
Model Summary: 261 layers, 61922845 parameters, 0 gradients
val: Scanning 'valid/labels.cache' images and labels... 29 found,
val: Scanning 'valid/labels.cache' images and labels... 29 found,
val: Scanning 'valid/labels.cache' images and labels... 29 found,
               Class      Images      Labels           P         
Traceback (most recent call last):
  File "test.py", line 316, in <module>
    test(opt.data,
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "test.py", line 111, in test
    out, train_out = model(img, augment=augment)  # inference and training outputs
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/Desktop/yolov3/models/yolo.py", line 121, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/ubuntu/Desktop/yolov3/models/yolo.py", line 152, in forward_once
    x = m(x)  # run
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/Desktop/yolov3/models/common.py", line 45, in fuseforward
    return self.act(self.conv(x))
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: [enforce fail at CPUAllocator.cpp:58] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 838139904 bytes. Error code 12 (Cannot allocate memory)
Please let me know if you can help rectify that; it would be of great help. Thanks!

Paeryn
Posts: 3305
Joined: Wed Nov 23, 2011 1:10 am
Location: Sheffield, England

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Wed Jun 23, 2021 6:57 pm

That looks like the process ran out of memory and couldn't allocate more (it fails allocating a single ~800 MB block). Are you running this on a 32-bit or 64-bit OS? If it's 32-bit, the maximum amount of memory a single process can have is 3 GB, so you can hit the limit even when you seem to have enough free RAM.
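Two quick checks along those lines (a sketch; getconf and uname are standard on Linux):

```shell
getconf LONG_BIT                 # prints 32 for a 32-bit userland, 64 for 64-bit
uname -m                         # armv7l => 32-bit userland, aarch64 => 64-bit
# The failed allocation from the traceback, for scale:
echo $((838139904 / 1048576))    # => 799 MiB in one contiguous block
```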
She who travels light — forgot something.
Please note that my name doesn't start with the @ character so can people please stop writing it as if it does!

juskirat2000
Posts: 13
Joined: Tue Jan 05, 2021 6:06 am

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Thu Jun 24, 2021 5:02 am

Yes, I am running this on a 32-bit OS, namely Ubuntu 20.04 on the Raspberry Pi. I also tried adding swap, but that didn't work for me.

So what should I do now? Please let me know the exact steps if possible; that would be of great help.

dbrion06
Posts: 505
Joined: Tue May 28, 2019 11:57 am

Re: Segmentation Fault error( core dumped) while running Yolov5 on a custom dataset on Raspberry Pi 4

Thu Jun 24, 2021 5:38 am

There are two solutions:
* Buy a 64-bit PC. It seems absurd, but since PCs are often faster than RPis (and have GPUs), you can train DNNs on them (training takes weeks on RPis).
* Save your work and install a 64-bit Raspbian (I am very satisfied with mine).

Swap cannot do anything about the 3 GB-per-process limit.
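A third option worth trying before reinstalling: shrink the allocations themselves. The Namespace output earlier in the thread shows batch_size and img_size are command-line options, so both can be lowered. Back-of-the-envelope, for the float32 input tensor alone (intermediate activations cost considerably more):

```shell
# Input-tensor size in MiB: batch * channels * height * width * 4 bytes
echo "batch=32, img=640: $((32 * 3 * 640 * 640 * 4 / 1048576)) MiB"
echo "batch=1,  img=320: $((1 * 3 * 320 * 320 * 4 / 1048576)) MiB"
# so, for example: python3 test.py --data data.yaml --batch-size 1 --img-size 320
```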
