How would a DOM-less, statically typed, ahead-of-time-compiled JavaScript compare to native code performance-wise?


The traditional answer to "why is JavaScript slower than native code?" is: "because it's interpreted". The problem with that claim is that interpretation is not a quality of the language itself. As a matter of fact, nowadays JavaScript code is being JITed, and still it isn't close to native speed.

What if we removed the interpretation factor from the equation and made JavaScript AOT compiled? Would it then match the performance of native code? If yes, why isn't this done on the web*? If no, where is the performance bottleneck now?

If the new bottleneck is the DOM, what if we eliminated that too? Would a DOM-less, compiled JavaScript be as efficient as native code? If yes, why isn't this done on the web**? If no, where is the performance bottleneck now?

After stripping away the DOM part and the interpretation part, the biggest difference I can see between JavaScript and C/C++ is the fact that the former has dynamic types. Suppose we eliminate that as well and end up with a DOM-less, statically typed, ahead-of-time-compiled JavaScript. How would that compare to native code? If it were as efficient, why isn't it used? If not, where is the bottleneck now? In that state, JavaScript would be practically identical to C.
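To make the "dynamic types" point concrete, here is a small sketch (the names are mine, purely illustrative) of why an AOT compiler struggles with JS as it stands:

```javascript
// Dynamic typing in action: one variable, three types over its lifetime.
// An AOT compiler cannot assign `x` a single fixed machine representation.
let x = 42;        // a number
x = "forty-two";   // now a heap-allocated string
x = { value: 42 }; // now an object

// The same operator even changes meaning with the operand types:
function plus(a, b) { return a + b; }
console.log(plus(1, 2));     // 3  (numeric addition)
console.log(plus("1", "2")); // 12 (string concatenation)
```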

*One might argue that JIT gives faster load times, but that wouldn't explain why AOT isn't being used for resource-intensive web apps such as 3D video games, where the AOT performance benefit would be worth the initial compilation delay (and a significant "game loading" delay is present anyway).

**A DOM-less JavaScript would use WebGL/canvas to interface with the user. This requires a minimal DOM that defines the initial HTML5 canvas, but that could theoretically be eliminated by revising the technology, if it's worth the performance benefit. Assume a DOM-less WebGL/canvas is possible when answering.

Edit: I'm talking about client-side compilation.

Important:
You seem to advocate a stripped, statically typed, compilable version of JS. The first thing that shows is that you have no clue what JS is: a multi-paradigm programming language that supports prototype-based OO, imperative and functional programming paradigms. The key here is the functional paradigm. Apart from Haskell, which can be sort-of strongly typed after you've defined your own infix operators, a functional language can't be statically typed AFAIK. Imagine C-like function definitions that return closures:

function = (function (object g) {
    char[] closureChar = g.location.href;
    object foo = {};
    function foo.bar = char* function ()
    {
        // this is a right mess already
        return &closureChar;
    };
}(this));

A function is a first-class object, too. Code commonly uses tons of lambda functions that return objects, or references to functions that might return themselves, other functions, objects or primitives... how on earth are you going to type that statically? JS functions are a way of creating scopes, structuring code and controlling the flow of a program just as much as they are things you assign to variables.
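A quick sketch of that problem (the function and its name are hypothetical, chosen only to illustrate the point):

```javascript
// A function that, depending on its argument, returns itself,
// another function, an object, or a primitive. Assigning `proteus`
// a single static return type is essentially impossible.
function proteus(kind) {
  if (kind === "self") return proteus;
  if (kind === "fn") return function (x) { return x * 2; };
  if (kind === "obj") return { answer: 42 };
  return "just a string";
}

console.log(proteus("self") === proteus); // true
console.log(proteus("fn")(21));           // 42
console.log(proteus("obj").answer);       // 42
console.log(proteus("anything else"));    // just a string
```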

The problem with compiling JS ahead of time is quite simple: once you compile the code, it has to run on a vast array of different platforms: desktops/laptops running Windows, OSX, Linux, Unix, plus tablets and smartphones with different mobile browsers...
Even if you did manage to write and compile JS that runs on all those platforms, the speed of JS would still be limited by its being single-threaded, and by its running on a JS engine (like Java runs on a VM).

Compiling code client side is being done. True, it takes time, but not an awful lot. It's quite resource-intensive, but modern browsers cache code in such a way that a lot of the preprocessing has already been done. Things that are possible to compile are cached in compiled state, too. V8 is an open-source, fast JS engine. If you want, you can check its source to see how it determines which aspects of the JS code get compiled, and which don't.
So, that's how V8 works... but JS engines have a lot more to do with how fast your code runs: some are fast, others aren't. Some are faster at one thing, while others outperform the competition in another area. More details can be read here.

Stripping out the DOM part isn't stripping down the language. The DOM is an API; it isn't part of JS itself. JS is an expressive but, at its core, small language, just like C. Neither has IO capabilities when left to its own devices, nor can either parse a DOM. For that, browser implementations of JS have access to a DOMParser object.
You suggest a minimal DOM... hey, a revamped DOM API makes sense. It's far from the best thing about the web. But you have to realize that the DOM and JS are separate entities. The DOM (and the DOM API) is managed by the W3C, whereas ECMA is responsible for JS. Neither has anything to do with the other. That's why the DOM can't be "stripped" out of JS: it was never part of it to begin with.
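You can see that separation by running plain JS outside a browser, e.g. in Node.js (a hedged sketch; the exact globals depend on the host environment):

```javascript
// Core JS (ECMAScript) knows nothing about the DOM. Outside a browser,
// the language works fine, but `document` simply doesn't exist --
// it's a host object that the browser injects.
console.log(typeof document);   // "undefined" outside a browser
console.log(typeof JSON.parse); // "function" -- part of the language
console.log(typeof Math.sqrt);  // "function" -- part of the language
```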

Since you compare JS to C++: you can write C++ code that compiles on both Windows and Linux machines, but that's not as easy as it sounds. Since you refer to C++ yourself, I think you might know that, too.
Speaking of which, if the only real difference you see between C++ and JS is static vs dynamic typing, you should spend a bit more time learning JS.

While its syntax is C-like, the language shares a lot more resemblance with Lisp (i.e. functional programming). It doesn't know of classes as such, but uses prototypes... The dynamic typing really isn't that big of a deal, to be honest.
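Prototype-based OO in a nutshell (a small illustrative sketch; the object names are mine):

```javascript
// Prototype-based OO: no classes needed, objects delegate to other objects.
var animal = {
  describe: function () { return this.name + " says " + this.sound; }
};

// Object.create() makes a new object whose prototype is `animal`,
// so `dog` inherits `describe` through the prototype chain.
var dog = Object.create(animal);
dog.name = "Rex";
dog.sound = "woof";

console.log(dog.describe());                        // Rex says woof
console.log(Object.getPrototypeOf(dog) === animal); // true
```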

So, bottom line:
Compiling JS to run on every machine would lead to something like MS's .NET framework. The philosophy behind that was "write once, run everywhere"... which didn't turn out to be true at all.
Java is cross-platform, but that's because it's not compiled to native code; it runs on a virtual machine.
Lastly, the ECMAScript standard (JS being the most common implementation) is not that good, being the result of a joint effort of the big competitors in the field: Mozilla, Google, Microsoft and an irrelevant Swiss company. It's one huge compromise. Just imagine those three big names agreeing to make a compiler for JS together. Microsoft will put forth its JScript compiler as the best, Google will have its own ideas, and Mozilla will probably have 3 different compilers ready, depending on what the community wants.

Edit:
You made an edit clarifying that you're talking about client-side JS. Because you felt the need to specify that, I feel as though you're not entirely sure where JS ends and the browser takes over.
JS was designed as a portable language: it hasn't got IO capabilities of its own, it supports multiple development paradigms, and it was (initially) an interpreted language. True, it was developed with the web in mind, but you could, and people do, use the language to query a database (MongoDB), as an alternative batch scripting language (JScript), or as a server-side scripting language (Node.js, ...). Some even use ECMAScript (the basic standard behind JS) to make their own programming language (yes, I'm talking about Flash's ActionScript).

Depending on the use case, JS is given access to objects/APIs that aren't native to the language (document, [object http].createServer and [object fs].readFileSync for DOM access, webserver capabilities and IO, respectively). Those form the bottlenecks, not the language itself.

As hinted at, JS was initially an interpreted language. It isn't really that way these days; the division bell between compiled and interpreted languages has been fading for the past decade, to be honest.
C/C++ used to be strictly compiled languages, but in some cases (.NET) C++ code needn't be compiled to machine code anymore...
At the same time, scripting languages like Python are used for so many purposes that they're perceived as programming languages, while the term scripting language somehow implies a "lesser language".
A few years ago, with the release of PHP5, ZendEngine2 was released, too. Since then, PHP is compiled to bytecode and runs on a virtual machine. You can cache the bytecode using APC. bcompiler allows you to generate standalone executables from PHP code, and Facebook's HPHPc (now deprecated) was used to compile PHP to C++, then to native code. Now, Facebook uses HHVM, a custom virtual machine. Find out more here.

The same evolution can be seen in JavaScript interpreters (which are called engines nowadays). They're not the everyday parse-and-execute threads of old, as you still seem to think they are. There's a lot of wizardry going on in terms of memory management, JIT compilation (tail stack optimizing, even), optimization and what have you...
All great things, but they make it rather hard to determine where the actual bottlenecks are. The way each engine optimizes differs more than IE6 differs from IE10, so it's next to impossible to pinpoint bottlenecks definitively. If one browser takes 10 seconds for a DOM-intensive task, another might take 1~2 seconds. If, however, the same browsers are pitted against each other to check the performance of the RegExp object, the boot might be on the other foot.
And let's not forget that, after you've written a blog post with your findings, you'll have to check whether either of the browsers has released a new version/update that claims to speed up certain tasks.
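One engine-level effect behind those differences is object "shapes" (hidden classes) -- a sketch, hedged because the actual impact varies per engine and version:

```javascript
// Objects created with properties in the same order share an internal
// shape, letting the JIT specialize property access; mixing shapes at
// one call site forces slower, generic lookups.
function makeFast(i) { return { x: i, y: i * 2 }; } // one shape for all

function makeSlow(i) {
  var o = {};
  if (i % 2) { o.x = i; o.y = i * 2; } // shape {x, y}
  else       { o.y = i * 2; o.x = i; } // shape {y, x} -- different order!
  return o;
}

function sum(objs) {
  var total = 0;
  for (var i = 0; i < objs.length; i++) total += objs[i].x;
  return total;
}
// Both variants yield identical results; only the machine code the JIT
// can generate for `sum` differs, and by how much is engine-specific.
```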

