Epiphany performs well but I'm hoping to squeeze some more speed out of it. Would upgrading to Raspbian Jessie:
- (a) make no difference?
- (b) speed things up?
- (c) break everything?
I think that since people have heard that Jessie tends to boot faster (because it uses systemd, assuming you can get systemd to work properly), they assume the improvement extends beyond the boot process. They don't understand that once everything is up and running, it still runs at about the same speed.

gkreidl wrote: Epiphany still has to be updated / fixed for Raspbian Jessie. You'll have to wait until it is ready. It won't get any faster, though - how should it?
Are you using any JavaScript frameworks, like jQuery? If so, you can improve the speed a bit by not using any frameworks. You might also want to experiment with a USB install and mild overclocking. I think my RPi Model B is overclocked to 850 with no overvoltage. Also, the GPU is at 350. Be careful if you try overclocking.

jweob wrote: Thanks for the replies, I will stay with Wheezy. I don't have a good explanation for why I thought it might be faster with Jessie, other than newer should be better? Or maybe Jessie just sounds faster than Wheezy. I have a bunch of RPi B+s, which is why I'm trying to find a way to use them rather than an RPi 2. The loading time for the page doesn't matter because all the interactivity is on one page controlled with JavaScript. It's more that it is sluggish when reacting to button presses, so I will see what I can do to optimise my code instead.
I have tried kweb in the past. I can't remember why I switched to Epiphany; I need to revisit it and see what improvement I get.
jQuery may be ugly, but it is a whole lot easier than writing all of its functions in your own code. Anything running in a web browser will be dependent on the WebKit engine, as that's where the performance will be gained. If the engine can't run hardware accelerated, then the RPi isn't the right platform.

Heater wrote: At the end of the day I do not believe jQuery is the problem. I hate jQuery with a passion but I'm sure it's quite efficient.
Code:
function FaceOff () {
    console.time('JavaScript');
    for (var i = 0; i < 10000; i++) {
        document.getElementById('my-id');
    }
    console.timeEnd('JavaScript');
    console.time('jQuery');
    for (i = 0; i < 10000; i++) {
        $('#my-id');
    }
    console.timeEnd('jQuery');
}
Code:
FaceOff()
display.js:27 JavaScript: 1.597ms
display.js:32 jQuery: 11.527ms
FaceOff()
display.js:27 JavaScript: 1.264ms
display.js:32 jQuery: 9.974ms
FaceOff()
display.js:27 JavaScript: 2.692ms
display.js:32 jQuery: 17.662ms
Code:
function FaceOff () {
    console.time('JavaScript1');
    for (var i = 0; i < 10000; i++) {
        document.getElementById('my-id');
    }
    console.timeEnd('JavaScript1');
    console.time('jQuery1');
    for (i = 0; i < 10000; i++) {
        $('#my-id');
    }
    console.timeEnd('jQuery1');
    console.time('JavaScript2');
    for (i = 0; i < 10000; i++) {
        document.getElementById('my-id');
    }
    console.timeEnd('JavaScript2');
    console.time('jQuery2');
    for (i = 0; i < 10000; i++) {
        $('#my-id');
    }
    console.timeEnd('jQuery2');
}
I doubt anyone uses all of jQuery's functions, and the most common uses are, AFAIR, fairly simple to wrap in a function of your own...

DougieLawson wrote: jQuery may be ugly but it is a whole lot easier than writing all of its functions in your own code.

Heater wrote: At the end of the day I do not believe jQuery is the problem. I hate jQuery with a passion but I'm sure it's quite efficient.
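As a rough sketch of what wrapping the common uses might look like: the helper names `byId` and `onClick` here are made up for illustration, standing in for the one or two jQuery calls a small page actually relies on.

```javascript
// Hypothetical stand-ins for the handful of jQuery calls a small
// page typically uses; no library download or setup cost.

function byId(id) {
    // Direct DOM lookup instead of $('#my-id')
    return document.getElementById(id);
}

function onClick(el, handler) {
    // Replaces $(el).on('click', handler)
    el.addEventListener('click', handler);
}
```

With a couple of wrappers like these, the rest of the page's code reads almost the same as the jQuery version it replaces.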
React makes putting together complex GUIs in web apps very easy. It enables and encourages the creation of modular components. It is not a "framework" that takes over your whole project; you can apply it to just small parts if you like. React is fast. Here is a little experiment with React and SVG I put together: https://github.com/ZiCog/propanel/tree/master/src
Code:
FaceOff()
display.js:27 JavaScript1: 1.294ms
display.js:32 jQuery1: 10.093ms
display.js:38 JavaScript2: 1.325ms
display.js:43 jQuery2: 6.337ms
FaceOff()
display.js:27 JavaScript1: 2.889ms
display.js:32 jQuery1: 16.437ms
display.js:38 JavaScript2: 2.777ms
display.js:43 jQuery2: 5.482ms
FaceOff()
display.js:27 JavaScript1: 1.282ms
display.js:32 jQuery1: 4.165ms
display.js:38 JavaScript2: 1.985ms
display.js:43 jQuery2: 3.977ms
jQuery is going to run on the client computer, not the server, but it's extra code in the file that needs to be sent initially. Any performance issues after it downloads will depend on the client computer. You can also set client computers to cache content to help with this. I'm sure jQuery is as efficient as any other framework available out there, but when you create a library to handle every possible situation like this, it's not as efficient as if you just included the code that you were actually using. jQuery can be very useful; however, if you're looking for a little extra website efficiency, you might see improvements from writing without it.

DougieLawson wrote: There's clearly quite a large overhead in getting jQuery running for the first jQuery function in the code. The second invocation runs quite significantly faster. So it does look like you may be able to code around jQuery's overhead.
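To make "writing without it" concrete, here is a hedged sketch of a click handler done with plain DOM APIs; the element ids `go` and `status` and the function name `setupButton` are invented for the example.

```javascript
// A button press handled without jQuery. Everything here is a
// standard DOM API, so no framework needs to load first.
function setupButton() {
    // $('#go').on('click', ...) without jQuery:
    var button = document.getElementById('go');
    button.addEventListener('click', function () {
        // $('#status').text('pressed') without jQuery:
        document.getElementById('status').textContent = 'pressed';
    });
}
```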
I discussed this with the instructor while taking a JavaScript class. I'd imagine that there are some things that run just as fast using jQuery, but some things do take a big performance hit.

jweob wrote: Changing jQuery to JavaScript on the click handlers didn't make any improvement. I got much more improvement later by optimising my code to touch the DOM as little as possible, but this does show that you can get some significant speed improvement from using native JavaScript.
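"Touch the DOM as little as possible" usually means batching updates instead of modifying the live page inside a loop. A minimal illustration, with made-up function names, using a `DocumentFragment` to collapse many insertions into one:

```javascript
// Slow: touches the live DOM on every iteration.
function fillListSlow(list, items) {
    for (var i = 0; i < items.length; i++) {
        var li = document.createElement('li');
        li.textContent = items[i];
        list.appendChild(li); // possible reflow each time
    }
}

// Faster: build everything off-DOM, attach once.
function fillListFast(list, items) {
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < items.length; i++) {
        var li = document.createElement('li');
        li.textContent = items[i];
        fragment.appendChild(li); // off-DOM, no reflow
    }
    list.appendChild(fragment); // single DOM touch
}
```

On a slow machine like a B+ this kind of batching tends to matter far more than which selector API you call.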
It's been quite a while since many interpreters actually read source for each step; most nominally interpreted systems tokenise in some fashion, ranging from trivial keyword tokenisation in many BASICs to full compilation to a sophisticated portable pseudo-code in Smalltalk and most Javas etc. The more modern systems (such as Smalltalk since 1984, Java since the late 90s, etc.) then translate/compile the pseudo-instructions at runtime for the particular device they're running on. If that takes more than a blink of an eye then it isn't doing the job right. Typically, when running a large benchmark suite in Squeak Smalltalk on a Pi, the dynamic compilation takes <1% of total time.

Heater wrote: However, modern JavaScript engines don't just interpret your source code; they use a JIT (Just-In-Time compiler). That means the first execution of anything may well be slower than subsequent executions. The first time through, code is compiled to bits of native executable code. Subsequently it need only run that code, much faster. Typically one would want to run any such benchmark over many iterations and do that more than once.
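The "run it more than once" advice can be sketched as a small harness; `bench` and its parameters are invented for illustration. The first pass includes any warm-up cost, so only the later passes approximate steady-state speed.

```javascript
// Time several passes of the same workload so JIT warm-up shows
// up as a slower first entry rather than polluting a single number.
function bench(fn, passes, iterations) {
    var times = [];
    for (var p = 0; p < passes; p++) {
        var start = Date.now();
        for (var i = 0; i < iterations; i++) {
            fn();
        }
        times.push(Date.now() - start); // ms for this pass
    }
    return times; // compare first vs last entries
}
```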
That's much more likely to be the startup cost. A common problem is too much initialisation being done when it may not ever be needed. Lazy initialisation is often better for fast startup. It may cost a little more for long-running jobs, though, so you pays your cycles and makes your choice.

Heater wrote: We could also imagine that the first use of jQuery causes it to create objects and set things up, which does not need to be repeated subsequently. Thus subsequent runs of a timing test may well be faster.
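The lazy-initialisation trade-off can be shown in a few lines; `buildTable` and its table of squares are placeholders for whatever expensive setup a page really does.

```javascript
// Lazy initialisation: defer expensive setup until first use.
var initCount = 0;

function buildTable() {
    initCount++; // count how often the expensive work actually runs
    var table = [];
    for (var i = 0; i < 1000; i++) {
        table.push(i * i);
    }
    return table;
}

var cachedTable = null;

function getTable() {
    if (cachedTable === null) {
        cachedTable = buildTable(); // first call pays the cost
    }
    return cachedTable; // later calls are just a null check
}
```

Startup stays fast because nothing is built until `getTable()` is first called, which is exactly why a first timing run can look slower than the rest.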
Ooh, now if code is simply being thrown away then you've got the translator wrong. A good system has inline caching, polymorphic inline caching, open caching, and can swap from one to the other without throwing ncode (native code) away. Really good systems can add class-sensitive runtime optimisation, meta-feedback and probably chocolate.

Heater wrote: (Aside: that JIT optimisation can be undone at run time. If a data type changes, from number to string say, at run time, then the generated native code has to be thrown away and it falls back to slow interpreting.)
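A hedged sketch of the number-to-string change Heater describes, with invented function names. Whether a given engine actually deoptimises here is engine-specific; the point is only that keeping a variable one type for its whole life gives the JIT nothing to undo.

```javascript
// Unstable: total starts as a number but can silently become a
// string mid-loop, which is the kind of type change that can force
// a JIT to discard its specialised native code.
function sumUnstable(values) {
    var total = 0;
    for (var i = 0; i < values.length; i++) {
        total += values[i]; // a string element turns + into concat
    }
    return total;
}

// Stable: convert up front so the loop only ever adds numbers.
function sumStable(values) {
    var total = 0;
    for (var i = 0; i < values.length; i++) {
        total += Number(values[i]);
    }
    return total;
}
```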