
Programming language?

Watching from above
Legend
Joined: Apr 9, 2004 | Messages: 3,828 | Reaction score: 752
Then you're doing it wrong. CoffeeScript is horrible.
Is not.
The vast majority of load-heavy applications are read-intensive, so caching as close to the user as possible tends to scale better than anything else (hence my refusal to use CDNs that don't have permanent caching for versioned files: it throws away every layer of caching between the server and the client with respect to latency, which is THE MOST IMPORTANT THING IN MODERN WEB APPLICATIONS).
Which has nothing to do with the type of software I use node.js with.
 
hello
Loyal Member
Joined: Jun 24, 2004 | Messages: 726 | Reaction score: 158
I'm not sure why you hate on CDNs? A CDN is used for static content; CDNs are usually nodes that act only as storage: fast, responsive web servers with no application on them, just a few caching headers returned with the images or CSS.

I think a CDN is a good solution: it offloads your application and DB servers from processing extra requests, and at the same time it lets browsers perform multiple parallel requests, which in effect speeds up loading time.
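As an aside, here's a minimal sketch of that kind of storage-only node in node.js, since node is the thread's topic. It does nothing but map a URL to a file on disk and return it with a caching header; the root path, port, and max-age value are assumptions, and Content-Type handling is omitted for brevity.

const http = require('http');
const fs = require('fs');
const path = require('path');

const ROOT = '/var/www/static'; // assumption: wherever the img/css files live

http.createServer(function (req, res) {
  // No application logic runs here: map the URL straight to a file on disk.
  const file = path.join(ROOT, req.url.split('?')[0]);
  if (!file.startsWith(ROOT)) { res.writeHead(403); res.end(); return; } // block ../ escapes
  fs.readFile(file, function (err, data) {
    if (err) { res.writeHead(404); res.end(); return; }
    // ...then return it with "just a few headers for caching".
    // (Content-Type handling is omitted to keep the sketch short.)
    res.writeHead(200, { 'Cache-Control': 'public, max-age=86400' });
    res.end(data);
  });
}).listen(8080);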
 
Ginger by design.
Loyal Member
Joined: Feb 15, 2007 | Messages: 2,340 | Reaction score: 653

I saw it show up for a brief moment in some fringe blogs and a few HN posts, but after that it was widely dismissed by community leaders and most startups (where the supposed benefits of CoffeeScript would be most readily accepted) as a pretty zero-value-add idea. To this day, I've yet to see a relevant company looking for CoffeeScript engineers.

For reasons such as those highlighted here and many others:

Readability is extremely important in code maintenance, and you spend the vast majority of your time reading code, not writing it, so this is a really, really big deal.

Which has nothing to do with the type of software I use node.js with.

These applications aren't conducive to load except in extremely fringe cases. The write-through load that Twitter or Facebook experiences is orders of magnitude lower than the read-through load of your average moderately popular news site. For instance, Craigslist ran for the majority of its life on Perl and MySQL: one of the slowest scripting languages paired with a database with horrible write performance. Scaling for writes is a problem only about five companies in the world have actually needed to tackle. That leaves almost everything else, where caching is a definite and well-understood solution.

The other difference is applications that do an extreme amount of CPU-intensive work, but that's far from your average web application and more in the realm of distributed computing.

I'm not sure why you hate on CDNs? A CDN is used for static content; CDNs are usually nodes that act only as storage: fast, responsive web servers with no application on them, just a few caching headers returned with the images or CSS.

I think a CDN is a good solution: it offloads your application and DB servers from processing extra requests, and at the same time it lets browsers perform multiple parallel requests, which in effect speeds up loading time.

Because static content, especially versioned static content, can be permanently cached, meaning your browser stores it locally on disk (or in memory) and doesn't need to ask the remote server whether the cache is stale, because it knows it never will be, or not for a very, very long time. Please refer to this chart to understand why any network request at all is terrible by comparison:
[attached chart: latency of a local cache hit vs. a network round-trip]


The problem is that a lot of bad CDNs (like Google for JS stuff) don't use permanent caching because they want to get usage metrics and potentially track users. You can't know if someone refreshes a page if every asset is cached locally, even though the page would load 100x+ faster.
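To make "permanent caching for versioned files" concrete, here's a minimal node.js sketch. The filename pattern and header values are illustrative assumptions, not anyone's actual config: the idea is just that a URL containing a content hash can never go stale.

// If the content hash is baked into the filename (e.g. app.3f9a2c1b.js),
// the file at that URL can never change, so the browser may cache it
// forever and never make a network request for it again.
const VERSIONED = /\.[0-9a-f]{8}\.(js|css|png|jpg)$/; // hypothetical naming scheme

function cacheControlFor(url) {
  if (VERSIONED.test(url)) {
    // one year, never revalidate: no request at all on repeat visits
    return 'public, max-age=31536000, immutable';
  }
  // unversioned resources (the HTML itself) must be revalidated
  return 'no-cache';
}

// usage: res.setHeader('Cache-Control', cacheControlFor(req.url));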
 
hello
Loyal Member
Joined: Jun 24, 2004 | Messages: 726 | Reaction score: 158
jMerliN said:
Because static content, especially versioned static content, can be permanently cached, meaning your browser stores it locally on disk (or in memory) and doesn't need to ask the remote server whether the cache is stale, because it knows it never will be, or not for a very, very long time. Please refer to this chart to understand why any network request at all is terrible by comparison

Wait, wait, wait. I'm not suggesting using external CDNs, like including jQuery from Google's servers. I'm talking about when you design a big scalable application from scratch and create the architecture on all fronts, including the network. I was talking about your very own CDN node, which you can configure so it caches the static content permanently on your PC in the very same way you are describing. Your argument kind of misses the point in that case.

As said, I was talking about an example like:
your app is reachable via: example.com
and your images, CSS and stuff are reachable via: static.example.com

By doing that you offload a node (or nodes, if using a load balancer) from processing static content requests, you can still configure the caching for them however you want, and most modern browsers allow up to 4-8 parallel downloads of static content, so splitting it among different domains lets your browser load the content faster. The only slower point is DNS resolution, but even DNS lookups are cached in your OS/browser.

Also, going further: based on your two previous posts I have a feeling that the 'cache' you are talking about is limited to content. What Negata and I were talking about was caching the DB architecture, models, pre-compiled code (if applicable), app settings, localization and stuff like that. It has nothing to do with the front-end application, and nothing to do with a CDN.

I was writing about combining Nginx + Memcached + Apache + an application cache in the context of the backend, so that the backend code is as cached and pre-compiled as possible and there is as little PHP processing and as few MySQL queries as possible. That is, we treat PHP and MySQL as potential choke points, since server resources are the most valuable.
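That stack amounts to the classic cache-aside pattern. Here's a minimal node.js sketch of the idea (a plain Map stands in for memcached, and queryDb is a hypothetical database helper), just to show how a TTL'd cache entry keeps the interpreter and MySQL out of the hot path.

const cache = new Map(); // stand-in for memcached in this sketch

async function getUsers() {
  const key = 'users:all';
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.rows; // served from cache: no interpretation work, no MySQL query
  }
  const rows = await queryDb('SELECT id, name FROM users'); // hypothetical DB helper
  cache.set(key, { rows: rows, expires: Date.now() + 60 * 1000 }); // 60s TTL
  return rows;
}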
 
Watching from above
Legend
Joined: Apr 9, 2004 | Messages: 3,828 | Reaction score: 752
I saw it show up for a brief moment in some fringe blogs and a few HN posts, but after that it was widely dismissed by community leaders and most startups (where the supposed benefits of CoffeeScript would be most readily accepted) as a pretty zero-value-add idea. To this day, I've yet to see a relevant company looking for CoffeeScript engineers.

For reasons such as those highlighted here and many others:
 
Ginger by design.
Loyal Member
Joined: Feb 15, 2007 | Messages: 2,340 | Reaction score: 653
Wait, wait, wait. I'm not suggesting using external CDNs, like including jQuery from Google's servers. I'm talking about when you design a big scalable application from scratch and create the architecture on all fronts, including the network. I was talking about your very own CDN node, which you can configure so it caches the static content permanently on your PC in the very same way you are describing. Your argument kind of misses the point in that case.

As said, I was talking about an example like:
your app is reachable via: example.com
and your images, CSS and stuff are reachable via: static.example.com

By doing that you offload a node (or nodes, if using a load balancer) from processing static content requests, you can still configure the caching for them however you want, and most modern browsers allow up to 4-8 parallel downloads of static content, so splitting it among different domains lets your browser load the content faster. The only slower point is DNS resolution, but even DNS lookups are cached in your OS/browser.

It isn't. A dedicated internal CDN is an architectural decision driven by needs that caching can't feasibly resolve, not a way to provide a caching endpoint. There's a reason Facebook has 200+ servers dedicated to serving profile images but one for serving static app content. A user might consume thousands of unique profile images in a single visit but will only need the static resources once. Caching doesn't help the former case, but it completely eliminates all network load in the latter, which means you just throw it up behind a webserver with caching headers set and never think about it again.

External CDNs are, as an architectural decision, exactly the same as creating a dedicated endpoint for serving static content; it's just that someone else is hosting it and paying for the bandwidth. The downside is that you have to trust them not to be evil, and in Google's case, they just don't pass muster.

And the subdomain splits have nothing to do with speeding up parallel downloads, since the caps aren't imposed per-domain. It has everything to do with load balancing, because caching is ineffective for that kind of content.

Also, going further: based on your two previous posts I have a feeling that the 'cache' you are talking about is limited to content. What Negata and I were talking about was caching the DB architecture, models, pre-compiled code (if applicable), app settings, localization and stuff like that. It has nothing to do with the front-end application, and nothing to do with a CDN.

They're all the same thing. A cache is a cache is a cache. If a GET to /users/ performs a DB query "SELECT <fields> FROM users" via some PHP code that is then interpreted, the best possible solution is to cache as close to the user as possible (which is what I said). A cache header on that response (if it can be expected to remain valid for, say, 10 minutes) will cause it to be cached in the browser or in memory, putting it as close to the end user's CPU as possible, minimizing latency, and preventing re-execution of the script interpretation or the DB query.

If it can't be expected to remain valid, then you can't enforce cache consistency at the browser, so you ask the browser to verify that the cached response isn't stale, which is where DB-level write triggers can be useful for very quickly testing whether a modification has occurred. If a script needs to run to retrieve an answer (and, if the cache is invalid, to re-run the query), caching the script's interpretation instead of re-interpreting it on every execution is as close to the client as possible. If the script can maintain state in some efficient way, it can even determine whether a modification of the table has occurred without touching the database (say, if all writes go through that script). If that's not possible (too many API endpoints, or the database is the only central authority), then you have to cache at the DB layer.

Caching is caching is caching. My point is that the only notable scale problem in almost every web application ever made has been fetch performance of non-realtime information, which makes caching at the browser or HTTPD level the most important consideration for almost every web application.
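The "verify that the cached response isn't stale" step is ordinary HTTP revalidation. A minimal node.js sketch follows, assuming nothing beyond the standard http and crypto modules; in practice you'd also cache the ETag server-side so a match can skip regenerating the body, which this sketch leaves out.

const crypto = require('crypto');

// Tag the response body; a browser holding a cached copy sends the tag
// back as If-None-Match, and a match gets an empty 304 instead of the body.
function respond(req, res, body) {
  const etag = '"' + crypto.createHash('sha1').update(body).digest('hex') + '"';
  res.setHeader('ETag', etag);
  res.setHeader('Cache-Control', 'max-age=600'); // "valid for, say, 10 minutes"
  if (req.headers['if-none-match'] === etag) {
    res.statusCode = 304; // not stale: the browser reuses its local copy
    return res.end();
  }
  res.end(body);
}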

I was writing about combining Nginx + Memcached + Apache + an application cache in the context of the backend, so that the backend code is as cached and pre-compiled as possible and there is as little PHP processing and as few MySQL queries as possible. That is, we treat PHP and MySQL as potential choke points, since server resources are the most valuable.

That doesn't resolve latency for writes and realtime-critical applications. And even though these aren't the norm, no caching is going to solve that problem except extremely smart application logic that can predict eventual consistency (such as a single node handling all writes to a given table/collection, so it knows dirty status before writes make it to disk or to read-only slaves/shards). That's algorithmic caching, not something you can just fire up a library to handle (like memcached or any opcode cache). For instance, if the query is a dynamic subset of a table, a cache sitting in front of the database can't reasonably be expected to guess the dirty state of that query, but an application layer that knows how the queries will be constructed might.
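A toy node.js sketch of that single-writer idea, with entirely hypothetical names (applyWrite, runQuery): because this process sees every write to a table, it can invalidate its own cached reads immediately, without ever asking the database whether anything is dirty.

const cachedReads = new Map(); // 'table:queryKey' -> result

function write(table, row) {
  applyWrite(table, row); // hypothetical: forward the write to the DB
  // This process saw the write, so it knows exactly what just went dirty:
  for (const key of cachedReads.keys()) {
    if (key.startsWith(table + ':')) cachedReads.delete(key);
  }
}

function read(table, queryKey, runQuery) {
  const key = table + ':' + queryKey;
  if (cachedReads.has(key)) return cachedReads.get(key); // known clean, no DB hit
  const result = runQuery(); // hypothetical: actually query the DB
  cachedReads.set(key, result);
  return result;
}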
 
Junior Spellweaver
Joined: Nov 5, 2012 | Messages: 191 | Reaction score: 17
Mainly JavaScript. But you could do it in many ways.
 