satadru
Posts: 14
Joined: Thu Apr 18, 2013 5:18 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Sat Aug 17, 2019 10:18 pm

There has been some discussion here asking about WORKING devices.

Here's what I have working:

/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
ID 1d6b:0003 Linux Foundation 3.0 root hub
|__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
ID 152d:0583 JMicron Technology Corp. / JMicron USA Technology Corp.

That is a Sabrent USB 3.1 Aluminum Enclosure for M.2 NVMe SSD in Gray (EC-NVME)
https://smile.amazon.com/gp/product/B07 ... UTF8&psc=1

I'm using that with the Crucial P1 500GB 3D NAND NVMe PCIe M.2 SSD - CT500P1SSD8
https://smile.amazon.com/gp/product/B07 ... UTF8&psc=1



Here is some simple benchmarking with Linux kernel 4.19.66-v8-gfc5826fb9 running in aarch64, against a ZFS volume, on Ubuntu 19.10, while I had a kernel compile going in the background inside a Docker container:

Code: Select all

dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB, 384 MiB) copied, 0.635668 s, 633 MB/s

Code: Select all

sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k  && sync" ; rm testfile 
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.26899 s, 390 MB/s

real	0m0.294s
user	0m0.008s
sys	0m0.261s
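Note that /dev/zero is maximally compressible, so on a ZFS volume with compression enabled these dd figures can flatter the drive. An incompressible variant for comparison (just a sketch; note /dev/urandom itself may bottleneck on the Pi's CPU):

Code: Select all

dd if=/dev/urandom of=/tmp/rand.bin conv=fdatasync bs=384k count=1k; rm -f /tmp/rand.bin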
Also, here is some suggested benchmarking as per https://askubuntu.com/a/991311/844422:

Sequential READ speed with big blocks:
READ: bw=732MiB/s (767MB/s)

Code: Select all

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [R(1)][50.0%][r=765MiB/s][r=765 IOPS][eta 00m:07s]
Jobs: 1 (f=1): [R(1)][93.3%][r=753MiB/s][r=753 IOPS][eta 00m:01s] 
Jobs: 1 (f=1): [R(1)][100.0%][r=757MiB/s][r=756 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=6927: Sat Aug 17 18:05:23 2019
  read: IOPS=731, BW=732MiB/s (767MB/s)(10.0GiB/13991msec)
    slat (usec): min=990, max=10929, avg=1339.89, stdev=295.52
    clat (usec): min=17, max=86994, avg=40923.69, stdev=7541.70
     lat (usec): min=1085, max=90860, avg=42265.95, stdev=7611.49
    clat percentiles (usec):
     |  1.00th=[ 5669],  5.00th=[31327], 10.00th=[36963], 20.00th=[38536],
     | 30.00th=[39584], 40.00th=[40633], 50.00th=[41681], 60.00th=[42206],
     | 70.00th=[43254], 80.00th=[44827], 90.00th=[46924], 95.00th=[49021],
     | 99.00th=[56886], 99.50th=[63177], 99.90th=[71828], 99.95th=[78119],
     | 99.99th=[85459]
   bw (  KiB/s): min=591872, max=804864, per=99.49%, avg=745677.63, stdev=40922.31, samples=27
   iops        : min=  578, max=  786, avg=728.07, stdev=40.02, samples=27
  lat (usec)   : 20=0.17%, 50=0.04%
  lat (msec)   : 2=0.21%, 4=0.33%, 10=0.97%, 20=1.54%, 50=93.30%
  lat (msec)   : 100=3.45%
  cpu          : usr=2.02%, sys=96.91%, ctx=1226, majf=0, minf=8203
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=732MiB/s (767MB/s), 732MiB/s-732MiB/s (767MB/s-767MB/s), io=10.0GiB (10.7GB), run=13991-13991msec
Sequential WRITE speed with big blocks:
WRITE: bw=209MiB/s (219MB/s)

Code: Select all

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [W(1)][12.2%][w=233MiB/s][w=232 IOPS][eta 00m:43s]
Jobs: 1 (f=1): [W(1)][23.1%][w=205MiB/s][w=205 IOPS][eta 00m:40s] 
Jobs: 1 (f=1): [W(1)][33.3%][w=231MiB/s][w=231 IOPS][eta 00m:34s] 
Jobs: 1 (f=1): [W(1)][45.1%][w=153MiB/s][w=152 IOPS][eta 00m:28s] 
Jobs: 1 (f=1): [W(1)][58.3%][w=212MiB/s][w=212 IOPS][eta 00m:20s] 
Jobs: 1 (f=1): [W(1)][69.4%][w=253MiB/s][w=253 IOPS][eta 00m:15s] 
Jobs: 1 (f=1): [W(1)][79.6%][w=217MiB/s][w=217 IOPS][eta 00m:10s] 
Jobs: 1 (f=1): [W(1)][91.8%][w=189MiB/s][w=189 IOPS][eta 00m:04s] 
Jobs: 1 (f=1): [W(1)][100.0%][w=191MiB/s][w=191 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=16043: Sat Aug 17 18:08:25 2019
  write: IOPS=208, BW=209MiB/s (219MB/s)(10.0GiB/49093msec); 0 zone resets
    slat (usec): min=982, max=482078, avg=4570.00, stdev=15117.52
    clat (usec): min=21, max=684582, avg=138601.34, stdev=98032.74
     lat (usec): min=1461, max=691085, avg=143178.50, stdev=99594.34
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   52], 10.00th=[   58], 20.00th=[   70],
     | 30.00th=[   82], 40.00th=[   94], 50.00th=[  111], 60.00th=[  128],
     | 70.00th=[  146], 80.00th=[  182], 90.00th=[  268], 95.00th=[  351],
     | 99.00th=[  510], 99.50th=[  567], 99.90th=[  676], 99.95th=[  684],
     | 99.99th=[  684]
   bw (  KiB/s): min=18395, max=419840, per=100.00%, avg=215432.78, stdev=81214.05, samples=96
   iops        : min=   17, max=  410, avg=210.08, stdev=79.38, samples=96
  lat (usec)   : 50=0.18%, 100=0.03%
  lat (msec)   : 2=0.08%, 4=0.14%, 10=0.49%, 20=0.75%, 50=2.51%
  lat (msec)   : 100=39.51%, 250=44.73%, 500=10.56%, 750=1.04%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=2555, max=2555, avg=2555.00, stdev= 0.00
    sync percentiles (nsec):
     |  1.00th=[ 2544],  5.00th=[ 2544], 10.00th=[ 2544], 20.00th=[ 2544],
     | 30.00th=[ 2544], 40.00th=[ 2544], 50.00th=[ 2544], 60.00th=[ 2544],
     | 70.00th=[ 2544], 80.00th=[ 2544], 90.00th=[ 2544], 95.00th=[ 2544],
     | 99.00th=[ 2544], 99.50th=[ 2544], 99.90th=[ 2544], 99.95th=[ 2544],
     | 99.99th=[ 2544]
  cpu          : usr=5.23%, sys=35.30%, ctx=29386, majf=0, minf=531
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=209MiB/s (219MB/s), 209MiB/s-209MiB/s (219MB/s-219MB/s), io=10.0GiB (10.7GB), run=49093-49093msec

Random 4K read QD1:
READ: bw=31.4MiB/s (32.0MB/s)

Code: Select all

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.12
Starting 1 process
TEST: Laying out IO file (1 file / 500MiB)
Jobs: 1 (f=1): [r(1)][11.7%][r=41.1MiB/s][r=10.5k IOPS][eta 00m:53s]
Jobs: 1 (f=1): [r(1)][21.7%][r=36.9MiB/s][r=9442 IOPS][eta 00m:47s]  
Jobs: 1 (f=1): [r(1)][31.7%][r=36.3MiB/s][r=9302 IOPS][eta 00m:41s] 
Jobs: 1 (f=1): [r(1)][41.7%][r=30.7MiB/s][r=7850 IOPS][eta 00m:35s] 
Jobs: 1 (f=1): [r(1)][51.7%][r=22.7MiB/s][r=5809 IOPS][eta 00m:29s] 
Jobs: 1 (f=1): [r(1)][61.7%][r=25.0MiB/s][r=6646 IOPS][eta 00m:23s] 
Jobs: 1 (f=1): [r(1)][71.7%][r=25.2MiB/s][r=6451 IOPS][eta 00m:17s] 
Jobs: 1 (f=1): [r(1)][81.7%][r=34.1MiB/s][r=8723 IOPS][eta 00m:11s] 
Jobs: 1 (f=1): [r(1)][91.7%][r=33.8MiB/s][r=8649 IOPS][eta 00m:05s] 
Jobs: 1 (f=1): [r(1)][100.0%][r=42.3MiB/s][r=10.8k IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=31301: Sat Aug 17 18:04:20 2019
  read: IOPS=8048, BW=31.4MiB/s (32.0MB/s)(1886MiB/60001msec)
    slat (usec): min=15, max=110833, avg=115.21, stdev=294.98
    clat (nsec): min=1759, max=60886k, avg=3927.30, stdev=101649.09
     lat (usec): min=17, max=110848, avg=119.89, stdev=313.43
    clat percentiles (nsec):
     |  1.00th=[  1896],  5.00th=[  1944], 10.00th=[  2064], 20.00th=[  2256],
     | 30.00th=[  2384], 40.00th=[  2512], 50.00th=[  2672], 60.00th=[  2896],
     | 70.00th=[  3152], 80.00th=[  3568], 90.00th=[  4576], 95.00th=[  6304],
     | 99.00th=[ 11328], 99.50th=[ 16768], 99.90th=[119296], 99.95th=[195584],
     | 99.99th=[864256]
   bw (  KiB/s): min=15936, max=49772, per=99.62%, avg=32069.38, stdev=7645.86, samples=119
   iops        : min= 3984, max=12443, avg=8017.25, stdev=1911.51, samples=119
  lat (usec)   : 2=7.83%, 4=77.78%, 10=12.92%, 20=1.04%, 50=0.23%
  lat (usec)   : 100=0.07%, 250=0.09%, 500=0.02%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=6.55%, sys=83.97%, ctx=19103, majf=5, minf=15
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=482902,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  READ: bw=31.4MiB/s (32.0MB/s), 31.4MiB/s-31.4MiB/s (32.0MB/s-32.0MB/s), io=1886MiB (1978MB), run=60001-60001msec
Mixed random 4K read and write QD1 with sync:
READ: bw=2071KiB/s (2121kB/s)
WRITE: bw=2072KiB/s (2121kB/s)

Code: Select all

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [m(1)][11.5%][r=1295KiB/s,w=1283KiB/s][r=323,w=320 IOPS][eta 00m:54s]
Jobs: 1 (f=1): [m(1)][20.0%][r=1603KiB/s,w=1639KiB/s][r=400,w=409 IOPS][eta 00m:48s]
Jobs: 1 (f=1): [m(1)][29.5%][r=2426KiB/s,w=2446KiB/s][r=606,w=611 IOPS][eta 00m:43s]
Jobs: 1 (f=1): [m(1)][39.3%][r=2282KiB/s,w=2190KiB/s][r=570,w=547 IOPS][eta 00m:37s]
Jobs: 1 (f=1): [m(1)][49.2%][r=2336KiB/s,w=2436KiB/s][r=584,w=609 IOPS][eta 00m:31s]
Jobs: 1 (f=1): [m(1)][59.0%][r=2394KiB/s,w=2482KiB/s][r=598,w=620 IOPS][eta 00m:25s]
Jobs: 1 (f=1): [m(1)][68.9%][r=2262KiB/s,w=2254KiB/s][r=565,w=563 IOPS][eta 00m:19s]
Jobs: 1 (f=1): [m(1)][78.7%][r=2278KiB/s,w=2406KiB/s][r=569,w=601 IOPS][eta 00m:13s]
Jobs: 1 (f=1): [m(1)][88.5%][r=2376KiB/s,w=2312KiB/s][r=594,w=578 IOPS][eta 00m:07s]
Jobs: 1 (f=1): [m(1)][100.0%][r=2445KiB/s,w=2601KiB/s][r=611,w=650 IOPS][eta 00m:00s]
Jobs: 1 (f=0): [f(1)][100.0%][r=1853KiB/s,w=1701KiB/s][r=463,w=425 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=32577: Sat Aug 17 18:11:10 2019
  read: IOPS=517, BW=2071KiB/s (2121kB/s)(121MiB/60001msec)
    slat (usec): min=16, max=63539, avg=310.37, stdev=1133.56
    clat (usec): min=2, max=2001, avg= 8.36, stdev=25.91
     lat (usec): min=19, max=63556, avg=321.51, stdev=1146.53
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    4], 20.00th=[    5],
     | 30.00th=[    6], 40.00th=[    7], 50.00th=[    8], 60.00th=[    9],
     | 70.00th=[    9], 80.00th=[   10], 90.00th=[   12], 95.00th=[   14],
     | 99.00th=[   26], 99.50th=[   47], 99.90th=[  265], 99.95th=[  363],
     | 99.99th=[ 1467]
   bw (  KiB/s): min=  588, max= 3400, per=100.00%, avg=2074.31, stdev=635.32, samples=119
   iops        : min=  147, max=  850, avg=518.50, stdev=158.80, samples=119
  write: IOPS=517, BW=2072KiB/s (2121kB/s)(121MiB/60001msec); 0 zone resets
    slat (usec): min=44, max=107841, avg=395.93, stdev=1422.29
    clat (usec): min=2, max=1490, avg= 9.53, stdev=22.81
     lat (usec): min=48, max=107856, avg=408.23, stdev=1424.30
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    5], 20.00th=[    6],
     | 30.00th=[    7], 40.00th=[    8], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[   10], 80.00th=[   11], 90.00th=[   13], 95.00th=[   15],
     | 99.00th=[   29], 99.50th=[   56], 99.90th=[  262], 99.95th=[  371],
     | 99.99th=[ 1205]
   bw (  KiB/s): min=  721, max= 3256, per=100.00%, avg=2075.40, stdev=612.40, samples=119
   iops        : min=  180, max=  814, avg=518.80, stdev=153.08, samples=119
  lat (usec)   : 4=10.66%, 10=69.44%, 20=18.18%, 50=1.19%, 100=0.23%
  lat (usec)   : 250=0.17%, 500=0.08%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=130, max=1376.4k, avg=1286.65, stdev=8480.13
    sync percentiles (nsec):
     |  1.00th=[   241],  5.00th=[   262], 10.00th=[   314], 20.00th=[   446],
     | 30.00th=[   596], 40.00th=[   812], 50.00th=[  1144], 60.00th=[  1352],
     | 70.00th=[  1544], 80.00th=[  1720], 90.00th=[  2024], 95.00th=[  2352],
     | 99.00th=[  3472], 99.50th=[  4128], 99.90th=[ 17280], 99.95th=[ 40704],
     | 99.99th=[296960]
  cpu          : usr=4.37%, sys=32.10%, ctx=98816, majf=5, minf=17
  IO depths    : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=31064,31076,0,62136 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=2071KiB/s (2121kB/s), 2071KiB/s-2071KiB/s (2121kB/s-2121kB/s), io=121MiB (127MB), run=60001-60001msec
  WRITE: bw=2072KiB/s (2121kB/s), 2072KiB/s-2072KiB/s (2121kB/s-2121kB/s), io=121MiB (127MB), run=60001-60001msec

dazbobaby
Posts: 5
Joined: Tue Jun 30, 2015 9:09 pm
Contact: Website

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Sun Aug 18, 2019 11:58 am

Tell me about it, I've got several Sabrent SATA III to USB 3 adapters and they all suck on the Pi 4.

My SSD speeds were so low that they didn't register (DietPi benchmark). So I ordered a couple of replacements from amazon:
https://smile.amazon.co.uk/gp/product/B07F7WDZGT

My transfer rates are now at proper SSD levels.
DietPi-Benchmark | RPi 4 Model B (armv7l) | IP: 192.168.1.23


│ Filesystem Benchmark Results:

│ - Filepath = /benchmark.file
│ - Test size = 100 MiB
│ - WRITE = 93 MiB/s
│ - READ = 180 MiB/s

This is on an old 250GB SSD through USB 3, and obviously with no TRIM support. So overall, excellent.
My Pi Blog https://the-bionic-cyclist.co.uk/

janforman
Posts: 1
Joined: Mon Aug 19, 2019 12:45 pm
Location: Czech Republic
Contact: Website

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Aug 20, 2019 8:01 am

It's very dependent on the firmware in the USB/SATA bridge.

For me, the VL711 and ASM1352R work in UASP mode (but only with the stock VL805 firmware on the RPi) and can reach approx. 358MB/s.
They do not work with the "beta" firmware from VLI posted on this forum.

But when the firmware in the bridge is outdated, UASP mode is very unstable.
I flashed all my bridges with the recommended FW, but it can be risky: one time I had to desolder the serial flash chip and reflash it off-board (my mistake - wrong FW).

Now it's stable and fast for me.
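For anyone wanting to confirm which driver their bridge ended up on, a quick check (a sketch; works on any recent Raspbian):

Code: Select all

lsusb -t                                  # Driver=uas means UASP is active; Driver=usb-storage means the fallback/quirk
dmesg | grep -i -e uas -e usb-storage     # shows which driver claimed the bridge at boot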

zappor
Posts: 3
Joined: Mon Sep 02, 2019 11:55 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Sep 03, 2019 9:20 pm

janforman wrote:
Tue Aug 20, 2019 8:01 am
It's very dependent on the firmware in the USB/SATA bridge.

For me, the VL711 and ASM1352R work in UASP mode (but only with the stock VL805 firmware on the RPi) and can reach approx. 358MB/s.
They do not work with the "beta" firmware from VLI posted on this forum.

But when the firmware in the bridge is outdated, UASP mode is very unstable.
I flashed all my bridges with the recommended FW, but it can be risky: one time I had to desolder the serial flash chip and reflash it off-board (my mistake - wrong FW).

Now it's stable and fast for me.
Same here with this device:

Code: Select all

[    1.537912] usb 2-2: New USB device found, idVendor=174c, idProduct=55aa, bcdDevice= 1.00
[    1.537929] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[    1.537945] usb 2-2: Product: USB3.0 External HDD
[    1.537959] usb 2-2: Manufacturer: ASMedia
[    1.561858] scsi host0: uas
[    1.563320] scsi 0:0:0:0: Direct-Access     Corsair  Force LS SSD     0    PQ: 0 ANSI: 6
It works in UAS mode with vl805_fw_013701.bin but not with vl805_fw_0137a8.bin
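For reference, the VL805 firmware currently in use can be checked with the rpi-eeprom tools (a sketch; assumes the rpi-eeprom package is installed):

Code: Select all

sudo rpi-eeprom-update        # the VL805 section of the output reports the current USB controller firmware
vcgencmd bootloader_version   # bootloader build, useful when reporting issues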

NOsen
Posts: 15
Joined: Wed Feb 06, 2013 11:08 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Sun Sep 15, 2019 2:31 pm

I'm using a USB3 stick as the system disk (booting from the SD card).
Is the quirk something I could apply, or will it mess up the system?

DirkS
Posts: 10018
Joined: Tue Jun 19, 2012 9:46 pm
Location: Essex, UK

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Sun Sep 15, 2019 3:15 pm

NOsen wrote:
Sun Sep 15, 2019 2:31 pm
I'm using a USB3 stick as the system disk (booting from the SD card).
Is the quirk something I could apply, or will it mess up the system?
Make a backup before trying something like this...
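For reference, applying the quirk is a one-line change to /boot/cmdline.txt (a sketch; the 152d:0578 pair is an example - substitute your adapter's VID:PID from lsusb):

Code: Select all

lsusb                           # note the adapter's ID, e.g. "ID 152d:0578 JMicron ..."
sudo nano /boot/cmdline.txt     # prepend usb-storage.quirks=152d:0578:u to the single line
sudo reboot
lsusb -t                        # the device should now show Driver=usb-storage instead of uas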

thatchunkylad198966
Posts: 126
Joined: Thu Jul 04, 2019 10:21 am
Location: UK, Birmingham

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Mon Oct 07, 2019 7:48 pm

Thanks for this! Saved me a lot of time looking on Google.
Running very nicely off my USB3-to-SATA cable with the root filesystem on an SSD.

I have another SSD that's got better read/write speeds so I'll be doing this again shortly; when I can be bothered! :D :P
One man's trash is another man's treasure! :) Pi's I have; Pi Zero, Pi Zero W, Pi 2 x2, Pi 3 x2, Pi 4 4GB x2.

User avatar
SyncBerry
Posts: 51
Joined: Sat Sep 21, 2019 11:13 am
Location: France (S-W)

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Mon Oct 07, 2019 10:39 pm

Thank you for this thread.
Latest updated Raspbian Buster, on a Pi 4 received at the end of September after the summer stock shortage.
My Crucial MX500 500GB with the latest FW, on two different SATA bridges (both JMS578: a no-name cable from Amazon for 5€, and a bus-powered enclosure from a local dealer, branded ConnectLand), prevented the Pi 4 from warm rebooting (shutdown -r) on USB3. (The bridges ship different stock FW, and I also played a bit with flashing the oDroid FW 173.1.0.2, with no success.) The Pi 4 would only cold boot. Being headless, I could only compare the time for ping to come back:
USB2 (cold & warm boot): ~30 up to 43s
USB3 cold boot: 100 up to 250s
Two successive reboots with usb-storage.quirks=152d:0578:u dwc_otg.lpm_enable=0... gave:
USB3 warm boot: 35s
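(A minimal way to take that measurement from another machine - a sketch; raspberrypi.local stands in for whatever name or IP the Pi answers to:)

Code: Select all

time ( until ping -c1 -W1 raspberrypi.local >/dev/null 2>&1; do sleep 1; done )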
sudo hdparm -tT /dev/sda1
[sudo] password for pi:
/dev/sda1:
Timing cached reads: 1798 MB in 2.00 seconds = 899.28 MB/sec
Timing buffered disk reads: 256 MB in 0.78 seconds = 327.22 MB/sec
Crucial claims the MX series have embedded supercaps to prevent data corruption, letting the controller finish its work until power is truly no longer needed. This may be a(nother?) reason for boot-up to fail when power comes back too soon (this I don't really believe, as the same would happen in any PC). Supercaps take a long time to discharge under a low current drain, and there may be a further issue when these drives sit behind USB bridges: recharging the supercaps may take longer than with a normal PC power supply because of the USB current limit. I can't figure out how a controller that is busy flushing data to the cells on power-off copes with power coming back and the host asking it to work again... but please forget this rambling, as the drives work fine on USB2.

Is it true that TRIM is disabled with this quirk? Will this wear out my SSD more quickly?

[EDIT]: when it crashed on USB3 after a reboot, I don't think the Pi was really dead: I had set the ACT LED trigger to heartbeat, and it pulsed as usual.

jerrm
Posts: 200
Joined: Wed May 02, 2018 7:35 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 2:06 am

SyncBerry wrote:
Mon Oct 07, 2019 10:39 pm
Is it true that TRIM is disabled with this quirk ? Will this wear out my SSD more quickly ?
Yes, it is true - there is no TRIM support with the usb-storage driver. It shouldn't have an impact on overall wear, but the drive can't use its blocks as efficiently. It's a shame, because the JMS578 adapters have been solid, with UAS and TRIM support that "just works" under PC Linux.
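A quick way to confirm that on a given setup (a sketch; /dev/sda is assumed to be the bridged SSD):

Code: Select all

lsblk --discard /dev/sda   # zeros in DISC-GRAN/DISC-MAX mean discards are not passed through
sudo fstrim -v /           # over plain usb-storage this fails with "the discard operation is not supported"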

Beware the odroid firmware. It reduces uas driver errors, but a bonnie++ benchmark and at least one "real world" job of mine break it.

The only reports I've seen of adapters with working UAS and TRIM support on the Pi are some StarTech ASMedia-based adapters with firmware updates.

User avatar
rpdom
Posts: 15597
Joined: Sun May 06, 2012 5:17 am
Location: Chelmsford, Essex, UK

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 6:42 am

jerrm wrote:
Tue Oct 08, 2019 2:06 am
It's a shame because the JMS578 adapters have been solid with UAS and TRIM support that "just works" under PC Linux.
That's interesting, because I have one JMS578 adapter that just won't work at all on my PC, but works with the Pi 4B after a firmware upgrade and setting the quirk option. I have others that work fine on both the Pi and PC after a firmware upgrade.

jerrm
Posts: 200
Joined: Wed May 02, 2018 7:35 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 1:05 pm

rpdom wrote:
Tue Oct 08, 2019 6:42 am
That's interesting, because I have one JMS578 adapter that just won't work at all on my PC, but works with the Pi 4B after a firmware upgrade and setting the quirk option. I have others that work fine on both the Pi and PC after a firmware upgrade.
I haven't had a problem with any adapter on the Pi once quirks is set to disable UASP. I'm really surprised you needed both the FW update and disabling UASP. I guess some manufacturers could have some really old FW on the chip.

fanoush
Posts: 491
Joined: Mon Feb 27, 2012 2:37 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 1:25 pm

jerrm wrote:
Tue Oct 08, 2019 2:06 am
Beware the odroid firmware.
Which one do you mean? A specific one from https://wiki.odroid.com/odroid-xu4/soft ... _fw_update ?
There are several versions inside the jms578fwupdater.tgz linked there; the v173.01.00.02 is even from 2019.

themikmik
Posts: 3
Joined: Wed Sep 25, 2019 2:28 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 3:33 pm

An update, if anyone is interested: while USB 2.0 seems to boot root just fine, I finally got this SATA to USB 3.0 adapter to boot. Same issues; the only difference is that today I ran an update. It's still wonky, but if I unplug during boot, at the raspberries, before it gets to the write-cache messages, then plug back into USB 3.0, it boots. Read/write speeds were SLOW on the bottom port. I then tested booting on the topmost USB 3.0 port and saw some write speed increase, mainly in time, with read speeds up to 121MB/s on a 7200rpm SATA HDD. If it helps, the top port seems to work better of the two 3.0 ports. Maybe the bottom one is sharing another channel?

jerrm
Posts: 200
Joined: Wed May 02, 2018 7:35 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 3:54 pm

fanoush wrote:
Tue Oct 08, 2019 1:25 pm
jerrm wrote:
Tue Oct 08, 2019 2:06 am
Beware the odroid firmware.
Which one do you mean? A specific one from https://wiki.odroid.com/odroid-xu4/soft ... _fw_update ?
There are several versions inside the jms578fwupdater.tgz linked there; the v173.01.00.02 is even from 2019.
v173.01.00.02 is what I tested. At first glance it appeared to work, but a bonnie++ benchmark fails in a BAD way, as does my zbackup restore job.

Looks like a newer "standard" FW image has been posted, dated August - I'll try that once I can get back and plug in the Pi4.

User avatar
rpdom
Posts: 15597
Joined: Sun May 06, 2012 5:17 am
Location: Chelmsford, Essex, UK

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 4:46 pm

jerrm wrote:
Tue Oct 08, 2019 1:05 pm
rpdom wrote:
Tue Oct 08, 2019 6:42 am
That's interesting, because I have one JMS578 adapter that just won't work at all on my PC, but works with the Pi 4B after a firmware upgrade and setting the quirk option. I have others that work fine on both the Pi and PC after a firmware upgrade.
I haven't had a problem with any adapter on the Pi once quirks is set to disable UASP. I'm really surprised you needed both the FW update and disabling UASP. I guess some manufacturers could have some really old FW on the chip.
Other adaptors worked fine with just the firmware update, so I tried that first. That didn't solve everything, but it did improve things. In the end I resorted to the quirks, which had proven unnecessary on another adaptor.

jerrm
Posts: 200
Joined: Wed May 02, 2018 7:35 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 5:21 pm

rpdom wrote:
Tue Oct 08, 2019 4:46 pm
Other adaptors worked fine with just the firmware update, so I tried that first. That didn't solve everything, but it did improve things. In the end I resorted to the quirks, which had proven unnecessary on another adaptor.
USB has always been a mess on Linux, even more so with the Pi. I've said before, and still believe, that RPT needs to find a reliable Pi-compatible adapter with UAS and TRIM support, stamp a logo on it, and sell it as official/authorized/supported/etc. Just please make it black and not that awful raspberry red.

User avatar
SyncBerry
Posts: 51
Joined: Sat Sep 21, 2019 11:13 am
Location: France (S-W)

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Oct 08, 2019 5:48 pm

The JMS/oDroid FW updater tool has parameters to set: -t, the auto spin-down timer, which should be ignored by SSDs (unless some interpret it as a low-power mode), and, more interestingly, -u 0|1 to enable/disable the SATA hotplug function. Assuming we use the drive for our rootfs, shouldn't we force this to 1? (Side effect: paste a sticker on the drive so you remember why, when it's plugged in elsewhere, the system won't offer to eject it.)
IIUC hdparm can talk to the drive through a recent bridge FW, so there may be many other parameters now available. Any geeks here? ;)
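For reference, the oDroid updater is typically driven like this (a sketch based on the flags named above and on the oDroid wiki - verify against your copy's help output, and dump the existing firmware first):

Code: Select all

sudo ./JMS578FwUpdate -d /dev/sda -b ./backup.bin                      # back up the current bridge FW
sudo ./JMS578FwUpdate -d /dev/sda -f ./JMS578-v173.01.00.02.bin -t 0   # flash new FW, spin-down timer off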
Last edited by SyncBerry on Wed Oct 09, 2019 5:01 pm, edited 1 time in total.

martywise
Posts: 1
Joined: Wed Oct 09, 2019 5:41 am

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Wed Oct 09, 2019 5:51 am

Wow. I was very excited about the various improvements in the Pi 4 and picked one up recently to try it out with a "real" disk instead of the SD card. Apparently there is still some problem putting the boot partition on a USB drive, but that is only a minor inconvenience -- the root filesystem can be relocated.

I picked up an inexpensive mSATA SSD (~60GB) and a USB 3 enclosure and moved my root volume onto it. Initially the results were disappointing: booting took a couple of minutes, and the hdparm buffered disk read figures were in the 600KB/sec range for the USB SSD.

Something was clearly wrong. I found this thread, implemented the quirks change described here, and rebooted. The system was back up before I could get my finger off the Enter key! Now hdparm shows ~350MB/sec transfer rates, compared to 600KB/sec before the fix, or ~10MB/sec from the SD card.
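For anyone repeating this, the move itself is roughly the following (a sketch; /dev/sda2 and /mnt/ssd are assumptions for the SSD's root partition and its mount point):

Code: Select all

sudo rsync -axH / /mnt/ssd/    # copy the running root onto the SSD partition
sudo blkid /dev/sda2           # note the SSD partition's PARTUUID
# then point root=PARTUUID=... in /boot/cmdline.txt (and the / entry in /etc/fstab) at the SSD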

Thanks!

M. Wise
Gloucester, Virginia, USA

StephenFalken83
Posts: 1
Joined: Thu Oct 31, 2019 2:00 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Thu Oct 31, 2019 2:09 pm

Thank god you guys posted this fix - I was pulling my hair out trying to figure out why my Ethereum node couldn't sync. Sure enough, the SSD bottleneck was the problem.

HankB
Posts: 125
Joined: Fri Jan 01, 2016 2:45 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Nov 05, 2019 1:18 pm

I had posted earlier that the TNP adapter "works well". Further experience has proved this wrong. I've been trying to run my Pi 4B from a SATA SSD (booting from an SD card) and it has not been stable, crashing within days, whereas I have run it from an SD card for weeks without difficulty. I've added back the workaround to disable UASP, and that seems not to help. It crashed again this morning, and I found this in /var/log/kern.log:

Code: Select all

Nov  4 05:05:12 nova kernel: [128194.297838] brcmfmac: brcmf_cfg80211_scan: scan error (-52)
Nov  4 09:21:12 nova kernel: [143554.779597] brcmfmac: brcmf_run_escan: error (-52)
Nov  4 09:21:12 nova kernel: [143554.779604] brcmfmac: brcmf_cfg80211_scan: scan error (-52)
Nov  4 12:57:12 nova kernel: [156515.177860] brcmfmac: brcmf_run_escan: error (-52)
Nov  4 12:57:12 nova kernel: [156515.177868] brcmfmac: brcmf_cfg80211_scan: scan error (-52)
Nov  4 16:28:12 nova kernel: [169175.572330] brcmfmac: brcmf_run_escan: error (-52)
Nov  4 16:28:12 nova kernel: [169175.572339] brcmfmac: brcmf_cfg80211_scan: scan error (-52)
Nov  4 16:33:12 nova kernel: [169475.568766] brcmfmac: brcmf_run_escan: error (-52)
Nov  4 16:33:12 nova kernel: [169475.568773] brcmfmac: brcmf_cfg80211_scan: scan error (-52)
Nov  4 20:02:42 nova kernel: [182045.398236] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:02:42 nova kernel: [182045.398244] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:02:42 nova kernel: [182045.845109] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:02:42 nova kernel: [182045.845117] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:02:44 nova kernel: [182046.970041] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:02:44 nova kernel: [182046.970060] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:02:44 nova kernel: [182047.441881] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:02:44 nova kernel: [182047.441890] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:08:12 nova kernel: [182375.305692] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:08:12 nova kernel: [182375.305700] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:08:12 nova kernel: [182375.719611] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:08:12 nova kernel: [182375.719620] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:18:12 nova kernel: [182975.351547] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:18:12 nova kernel: [182975.351555] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:18:12 nova kernel: [182975.717493] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:18:12 nova kernel: [182975.717502] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:28:12 nova kernel: [183575.761909] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:28:12 nova kernel: [183575.761919] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:28:13 nova kernel: [183576.272446] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:28:13 nova kernel: [183576.272458] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:38:13 nova kernel: [184175.969506] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:38:13 nova kernel: [184175.969515] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:38:13 nova kernel: [184176.608967] [drm:vc4_bo_create [vc4]] *ERROR* Failed to allocate from CMA:
Nov  4 20:38:13 nova kernel: [184176.608975] [drm]                           dumb:  54072kb BOs (9)
Nov  4 20:47:34 nova kernel: [184737.019769] Unhandled prefetch abort: breakpoint debug exception (0x222) at 0x00000000
Nov  5 06:16:06 nova kernel: [218850.144526] Bluetooth: hci0: sending frame failed (-49)
root@nova:/mnt/boot/var/log#
Any ideas what these errors mean or how to track them down and resolve them?

Thanks!

jamesh
Raspberry Pi Engineer & Forum Moderator
Raspberry Pi Engineer & Forum Moderator
Posts: 24165
Joined: Sat Jul 30, 2011 7:41 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Nov 05, 2019 2:19 pm

brcmfmac is the wireless driver, but that vc4_bo_create allocation failure indicates something is running out of (CMA) memory. Are you fully up to date? (apt update && apt full-upgrade)
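(The CMA pool can be watched directly while reproducing the problem; a sketch:)

Code: Select all

grep -i cma /proc/meminfo   # CmaTotal vs CmaFree shows how depleted the pool is
dmesg | grep -i cma         # allocation failures are logged here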

You are also getting errors in multiple subsystems; is your power supply adequate?
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I think it’s wrong that only one company makes the game Monopoly.” – Steven Wright

HankB
Posts: 125
Joined: Fri Jan 01, 2016 2:45 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Tue Nov 05, 2019 7:40 pm

jamesh wrote:
Tue Nov 05, 2019 2:19 pm
brcmfmac is the wireless driver, but that vc4_bo_create allocation failure indicates something is running out of (CMA) memory. Are you fully up to date? (apt update && apt full-upgrade)

You are also getting errors in multiple subsystems; is your power supply adequate?
Fully up to date. I run `apt update` and `apt upgrade` several times a week.

Power supply is the official RPi USB-C supply. I power it through a smart switch that can report power usage. At idle it varies between 5.5 and 6 watts. If I run a compute benchmark (sysbench --test=cpu --cpu-max-prime=25000 --num-threads=4 run) it goes up to about 8 watts. If I add a disk benchmark (fio) it goes up to about 10. The SSD is a Crucial M4 256GB 2.5" SATA unit. I couldn't find power usage in the Micron specs, but one review reported up to 2.9W. I have it powered through the USB-3 connector on the Pi 4B (4GB unit). I have never seen the 'lightning bolt' that indicates low power while using this power supply.

As for the possible GPU involvement... Just earlier today I was looking for the screen-blanking timeout setting and ran across the graphical equivalent of raspi-config. It showed 76MB allocated to the GPU; raspi-config shows 64MB. Neither seems right to me, because I don't recall changing it. (I frequently do so with Pi Zeros, which I usually run headless, but not with the 1GB and larger Pis.) I have a keyboard and a Logitech receiver on the USB-2 ports. One video output goes to a 1600x1200 LCD monitor.

My cmdline.txt is

Code: Select all

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=98408e28-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles usb-storage.quirks=357d:7788:u
Thanks!

edit.0: I just rebooted using the SD-card-root environment (a different SD card, but probably the one I copied to the SSD) and found:
* the graphical config program reports 76MB allocated to the GPU.
* raspi-config reports 64MB allocated to the GPU. I changed it to 256 and now the GUI settings program also reports 256MB. Seems like there is a bug there.
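(For what it's worth, the firmware's own view can be queried directly, which helps tell the two tools apart; a sketch:)

Code: Select all

vcgencmd get_mem arm               # memory handed to Linux
vcgencmd get_mem gpu               # memory held back for the GPU/firmware
grep -i gpu_mem /boot/config.txt   # what raspi-config actually wrote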

edit.1: I have made two changes. First, I switched to a different USB/SATA adapter; this one requires the UASP-defeat workaround to work at all. Second, I adjusted the split of RAM dedicated to the GPU to 256MB. This was a couple of days ago. If it runs for a week or two I will consider the problem solved. I can't rule out the possibility that an update at about the same time made a difference as well. If the system remains stable, I will try swapping back to the other USB/SATA adapter to see if that was the problem.

edit.2:

Code: Select all

pi@nova:~ $ uptime
 08:51:54 up 7 days, 18:14,  2 users,  load average: 1.36, 0.72, 0.36
pi@nova:~ $
I'm going to declare victory on the stability front. I made two changes to get to this point (video RAM allocation and the USB/SATA adapter). I'm going to swap back to the other SATA/USB adapter and see if the stability problem returns.
Last edited by HankB on Wed Nov 13, 2019 2:54 pm, edited 1 time in total.

mihalis68
Posts: 2
Joined: Fri Nov 08, 2019 5:32 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Fri Nov 08, 2019 5:37 pm

I was suffering from this problem too. I've got a 1TB Samsung 970 EVO NVMe and it wasn't working in a couple of different NVMe-USB adapters. I was going to configure the quirks setting, but then I had a thought: I swapped out the power supply that came in the Raspberry Pi 4 CanaKit for an Asus power adapter for a Chromebook. PROBLEM SOLVED.

Whereas before I would get disconnects after just a few files in an rsync copy of my root drive, it has now copied an entire Ubuntu 19 install flawlessly. The power adapter is this one: https://www.amazon.com/gp/product/B07G5 ... Q54QEZ1421 so it provides a solid 3 amps.

So I suspect that some of the problems in this thread really are down to a bad power supply (as has been mentioned in this thread, actually).
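One way to confirm or rule out supply trouble while a copy is running (a sketch):

Code: Select all

vcgencmd get_throttled   # 0x0 is clean; bit 0 = under-voltage now, bit 16 = under-voltage has occurred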

Chris Morgan

mihalis68
Posts: 2
Joined: Fri Nov 08, 2019 5:32 pm

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Fri Nov 08, 2019 9:20 pm

It may be bad form to follow up to one's own post, but even with a 3A power supply I do eventually see the USB disconnects, so it's not the entire solution after all. It is much better with the better power supply, but still not stable, unfortunately.

Chris Morgan

frareinif
Posts: 2
Joined: Fri Nov 22, 2019 10:19 am

Re: STICKY: If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB3.0 SSDs, read this

Fri Nov 22, 2019 10:42 am

I have this problem with an HDD (WD My Book / USB 3) after I switched from a 3B to a 4. Should the fix work for me too?

Edit: modifying /boot/cmdline.txt did not help. But a powered USB hub did... :)
Last edited by frareinif on Thu Nov 28, 2019 9:43 pm, edited 1 time in total.

Return to “Troubleshooting”