Are you asking this:
There is a NAS serving files to multiple clients concurrently. However,
1 - Why do some clients get almost all the bandwidth, while the others only get a trickle?
2 - Why don't the slow clients speed up, even after the faster ones have finished?
My GUESS is that you need a better NAS machine.
Long story short: try setting up a Pi as the NAS, and use the Fair Queue queuing discipline on it. That might solve both problems.
Regarding question 1: this can happen when the NAS machine's queuing discipline is First In, First Out.
"Queuing discipline" is Linux terminology, often shortened to qdisc. The BSDs call it ALTQ (ALTernate Queueing).
It's the rule that decides, when multiple packets want to leave the same network interface, which one goes first.
The simplest rule is First In, First Out: packets that arrive earlier get to use the interface first. The problem is that some clients can send their TCP ACK packets back faster, which makes the NAS send them data faster, so those clients can occupy the interface almost completely. This is not because those clients are misbehaving; they simply started their transfers earlier, so their TCP windows have grown larger.
To solve question 1, the NAS needs to be able to run a more advanced queuing discipline. In Linux, there is a qdisc called fq, aka Fair Queue.
When facing multiple packets, this qdisc sorts them into separate queues; by default each TCP session (i.e. each socket) gets its own queue. It then dequeues those queues in a round-robin fashion, so none of them can crowd the others out.
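To make the round-robin idea concrete, here is a toy simulation (plain Python, not real kernel code; the flow names and packet counts are invented) contrasting a FIFO queue with per-flow round-robin dequeuing. Flow A started earlier, so it already has many packets buffered:

```python
from collections import deque, OrderedDict

# Flow A started earlier, so more of its packets are already
# sitting in the buffer (its TCP window has grown larger).
arrivals = [("A", i) for i in range(8)] + [("B", 0), ("B", 1), ("C", 0)]

# FIFO: dequeue strictly in arrival order; B and C wait behind A.
fifo = deque(arrivals)
fifo_order = [fifo.popleft()[0] for _ in range(len(arrivals))]
print("FIFO:", "".join(fifo_order))  # AAAAAAAABBC

# fq-style: one queue per flow, served round-robin.
flows = OrderedDict()
for flow, seq in arrivals:
    flows.setdefault(flow, deque()).append(seq)

rr_order = []
while flows:
    for flow in list(flows):      # visit each active flow in turn
        rr_order.append(flow)
        flows[flow].popleft()
        if not flows[flow]:       # drop flows with empty queues
            del flows[flow]
print("fq:  ", "".join(rr_order))  # ABCABAAAAAA
```

Notice that under FIFO, B and C can't transmit anything until A's backlog drains, while under fair queuing they are served in the very first round.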
The documentation can be read with "man tc-fq".
Regarding question 2, I think it's because the NAS's TCP congestion control algorithm never re-probes the path capacity after a while.
There is another part of TCP that also tries to make the network more fair: TCP congestion control.
Note that this one works at layer 4 (TCP), for end-to-end fairness, while the qdisc works at the link layer, for fairness on a single link.
TCP congestion control should work like this: when the transfer starts, it goes slowly, then keeps raising its speed until it finds the path has reached its capacity, and holds that speed. After a timeout, it should probe again, to see whether the path conditions have changed.
My guess is that your NAS's TCP congestion control is not doing this re-probing; it never re-tests the link condition. If that's the case, it's not normal, it's a faulty TCP implementation.
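As a toy model of that probing behavior (this is a simplified additive-increase/multiplicative-decrease sketch, not the real CUBIC algorithm; the capacity numbers are invented), you can see how a sender that keeps probing will find new bandwidth after another client finishes:

```python
def simulate(capacity_over_time, start_rate=1):
    """Toy AIMD sender: raise the rate by 1 each tick, halve it
    when the rate exceeds path capacity (a 'loss'). Re-probing
    happens because the sender never stops increasing."""
    rate = start_rate
    history = []
    for capacity in capacity_over_time:
        if rate > capacity:          # congestion signal: back off
            rate = max(1, rate // 2)
        else:                        # room left: probe upward
            rate += 1
        history.append(rate)
    return history

# Path capacity is 10 for 20 ticks, then the other client
# finishes and capacity jumps to 30: the probing sender finds it.
caps = [10] * 20 + [30] * 30
h = simulate(caps)
print(max(h[:20]), max(h[20:]))  # 11 31
```

A sender that stopped probing after the first phase would stay stuck near 10 forever, which is exactly the symptom in question 2.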
To solve both problems, I think you could try setting up a Pi as the NAS.
You can then check the queuing discipline with the command "ip link"; look at the qdisc part of the output.
You could change the qdisc to fq with "sudo tc qdisc add dev eth0 root fq"
By default, Raspbian uses the CUBIC TCP congestion control algorithm. It's already a very good one: Linux uses it by default, and so does Windows 10.
You can see it with "cat /proc/sys/net/ipv4/tcp_congestion_control".
I'm not going to make things more complicated than they already are; my university never taught me this stuff.
So I don't think we need to change the Pi's congestion control.
It's a 144-megabyte file after all; I think a Pi NAS would be sufficient.