I did tell you it's a huge %
But I've done a memory dump of the gs before and it mostly contained character data of online people which were loaded into the map from the database.
I still can't load all the maps with that file from 343. It gets to 32 and my CPU locks up freezing the entire system. (I'm also getting a lot of "err : info : gdelivery::connect to GAuth server failed. Reconnecting..." messages.)
Running with the normal ./start.sh command, I am able to now load 6 maps compared to my previous 3. That's a huge improvement. Now I can leave the Cube and OHT up all the time.
(N.B. I'm not trying to start an argument, but rather to make sure there's clarity. Linux MM/VM is confusing even if you think you know how the code works; it's been redesigned enough times over the years that it's almost like looking at unfamiliar code each time if you only look at it annually or less...)
Due to the way that virtual memory works in Linux, this is (mostly) untrue, especially with certain tasks such as large Java programs (e.g. enterprisey J2EE stuff) or certain types of databases (which is getting WAY off-topic for RZ, but *coughcough* Oracle).
However, for the subject of this discussion (PW private servers), I concur that 1:1-1.5:1 is the target to aim for, due to the evil OOM. That being said, if a system is (ab)using a lot of shmfs/tmpfs space, I could see an argument for going as high as 2:1. However, I'd probably want to find out why there was heavy utilization there and move the offender to a separate machine.
I don't want to get into the way that the Linux kernel manages things like LRUs with VM, but basically, Linux handles free memory differently based on how much total (phys+swap) memory is available. For example, there's a magic number around 10GB where it suddenly starts allocating memory differently. Many people with 4GB of RAM will find that around 6GB of swap "works better" than 4GB for workloads that use memory in bursts, for exactly this reason.
Of course, everyone should be using a 64-bit kernel/OS for things like private servers. If there haven't been enough arguments in favor of it already, open your Linux kernel source tree, go to Documentation/vm, and scroll to the bottom of highmem.txt (you can safely ignore the technical details above the last paragraph):
Spoiler:
In this link, you mention dropping your swap to less than your physical RAM. Basically, you never want to drop below the amount of installed physical memory that is exposed to the operating system. If you're using a server with hot-spare or memory cloning/mirroring, however, it's a good rule of thumb to consider all of the installed memory as eventually being accessible to the OS.
As I know that many of the people here are fans of Ubuntu, I'll quote their FAQ:
Ubuntu Swap FAQ
Spoiler:
All this being said, while I'm on the subject of Linux VM, has anyone tried to see if KSM helps the memory footprint? I've been kind of afraid to try, but I have a testing box here I can turn on memory cloning on to drop its available RAM in half if no one else is game...
Also on this subject, perhaps some enterprising soul might want to investigate other tunables to make running out of RAM suck a bit less. Playing with things like the OOM and overcommit policies seem like good starts, and I've not seen much (any) discussion of these on RZ.
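For anyone who wants to poke at those knobs, here's a minimal read-only sketch (assuming a Linux box with /proc mounted; the actual writes are left commented out since they need root and deserve careful testing):

```shell
#!/bin/sh
# Inspect the overcommit and swap tunables (read-only).
# Meanings per Documentation/vm/overcommit-accounting in the kernel tree:
#   0 = heuristic overcommit (the default), 1 = always grant, 2 = strict
echo "overcommit_memory: $(cat /proc/sys/vm/overcommit_memory)"
echo "overcommit_ratio:  $(cat /proc/sys/vm/overcommit_ratio)"  # used only in mode 2
echo "swappiness:        $(cat /proc/sys/vm/swappiness)"        # 0-100: how eagerly to swap

# To experiment with strict accounting (as root), you might try:
#   sysctl -w vm.overcommit_memory=2
#   sysctl -w vm.overcommit_ratio=50
```

In strict mode (2), the kernel refuses allocations past swap + ratio% of RAM instead of handing out IOUs, which trades occasional malloc failures for never meeting the OOM killer.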
(Disclaimer: I've been doing this kind of stuff with Linux since 1995 [and "bigger machines" before that], about the time when kernel 2.0 came out. So if I managed to slip in a historical artifact that hasn't been true for some years, I apologize.)
Last edited by lewellyn; 14-12-11 at 11:36 PM. Reason: Why do you always seem to find thinkos right after hitting Post? :/
IDK where you got your misinformation from -- why the hell wouldn't you want your swap to be lower than physical? If you're running a really high-end server with 2TB of RAM, there is NO WAY you would want 2TB of swap too -- that's just asinine!
hibernation
So you can physically cut power to the machine and go back right to where you were. That is kinda why 2:1 is advocated, but beyond that is insane.
I'll stick with the info I have obtained:
NO MORE than 1:1
That's the point: people and the 'industry' relied more on swap because of the high cost of RAM. Now RAM is so cheap there is NO reason to go above 1:1 (to me). (Even that quote admits this "recommendation" dates back to a time when physical RAM was very expensive.)
Additionally like I said, yes, in very large amounts you do not even want 1:1 (that's just ridiculous)
add:
put it this way
if I had a machine that had 2 TB of PHYSICAL ram, I would NEVER run 2 TB of swap. The correct amount of swap for having 2 TB of physical ram would be 500 GB of swap
not that I could even imagine utilizing 2TB of ram period, let alone the 500 GB swap beyond that O.O
Even the idea of a (very brief) moment of downtime doesn't mean anything to me; I can't see justifying 2:1 just so *maybe* you could keep a machine/server running seamlessly, and I think the world has become too obsessed with that. So what if a machine needs to come down for 10 minutes for a RAM swap -- really, who cares!
I guess my overall point is:
If you buy enough RAM (especially since it's cheap enough nowadays), you technically wouldn't even need swap! Like that 2TB 'dream': I would never use it all, so in theory, if I did have a machine with 2TB of physical RAM, I actually wouldn't need swap at all!
Regardless, the points made in that kernel development thread are just the tip of the iceberg now. In the intervening half decade or so, even more parts of the kernel have come to rely upon adequate swap space. As I posted, two of the most overlooked (and yet most hated) are the overcommitter and the OOM killer.
Especially in the context of running a PW private server, those need to be kept in mind. I'm willing to bet most people here don't even realize that Linux is lazy about actually providing processes the memory they ask for, and won't touch their swap space until it "needs to".
If you have insufficient swap, then as soon as there's a large memory request that it has overcommitted, the kernel will try to swap a bunch of stuff out at once and fail. Then it'll kill something to free up some RAM. It happens so fast that it just looks like "it died".
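You can actually watch the gap between what the kernel has promised and what it can deliver on a live box; a quick sketch (Linux-only, field names straight out of /proc/meminfo):

```shell
#!/bin/sh
# CommitLimit  = the most the kernel will promise (only *enforced* in
#                strict overcommit mode), derived from swap + a share of RAM.
# Committed_AS = what it has already promised to running processes.
# When Committed_AS sits far above CommitLimit on a heuristic-overcommit
# box, the OOM killer is one unlucky page fault away.
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
```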
Pretty much, please don't assume how Linux manages memory if you've not hacked on that part of the kernel. (Whether for some masochistic definition of "fun" or for probably not-enough pay.)
Last edited by 343; 15-12-11 at 12:27 AM. Reason: No forum didn't fail; my bad eyes did...
That is technically a good thing tbh... forcing everything to stay in RAM as long as possible because of the slow as all hell disks.
And 343, you've obviously never seen HP NonStop machines, or even some made by IBM, that are literally designed to never shut down: there are at least two of every single component in the system so that you can upgrade and change things while it is running. Heck, you could give the OS a whole new motherboard, CPU, and RAM upgrade without shutting it off or telling it you are about to do so.
Er, not quite how overcommitting works... It's more like this:
App: Hey, I need a couple gigs of RAM.
Kernel: Huh? OK, here's the address.
* Kernel backs a bit of that memory as it's touched, and the app uses 64MB of it, since the kernel knows most things don't use most of their requests.
* App suddenly needs the rest of the memory and tries to use it.
Kernel: Hold on a sec, I'm swapping.
Ideally, a fraction of a second later, the app can use the rest of the overcommitted memory space. However, if there isn't enough swap, the kernel takes steps to free up RAM. Usually by killing things, sometimes even the process requesting to use all the memory that the kernel told it that it could have!
Basically, overcommitted memory is memory that is promised to apps but isn't actually being used by anything. And as the comment I quoted states, its default is to be somewhat wary of wild requests, but still promise more than it can fulfill. The entire point is to use less of the swap, speeding up your system.
It comes down to this: Swap is slow, but it can be a necessary evil. Keep enough around that the kernel can live up to the promises it makes to your workload.
This has been a huge oversimplification. However, it's fairly accurate.
OK, I am still not understanding what you're saying here.
So you're telling me that even if I had a machine with more RAM than it would ever need to use, it would STILL require swap equal to or greater than physical RAM?
This makes no logical sense to me. What I'm hearing is:
"yes, even though your machine has 16GB of physical RAM, and even though your OS and 'applications' combined will only take up (let's say) 10GB (leaving 6GB 'too much') -- I STILL need at least (not like I'd say 'at MOST') 16GB of swap"
To me that defies logic. If I have a machine that is configured to need 10GB of ram between OS and 'applications' and I 'run' it on a machine that has 32GB of physical ram, imho why would swap be technically *necessary* at all ???
Last edited by 343; 15-12-11 at 01:30 AM.
That's why I linked that long thread from the kernel hackers when they discussed precisely that a few years back.
Basically, what it comes down to is that even if you aren't "using" all the RAM in your system, the OS is using it for its own nefarious purposes (like caching hard disk reads). Many apps are aware of how Linux handles memory (e.g. promising more than it can actually give to apps, while using memory to speed up the system) and purposely request large blocks of memory so they have a better chance of getting it when/if they need it.
This is fully transparent to the user if there's enough swap, as the memory that has been overcommitted doesn't actually "live" in either physical memory or as swap: it's like the kernel gives an IOU that looks just like real money. So when some app goes to use memory it's been promised, it doesn't know whether it has an IOU or the real thing. The kernel handles all that transparently, provided it can shuffle things into swap that haven't been used in a while or can kill something to free up some memory (to further the money analogy: when a process collects on its IOU it thinks is real money, the kernel hurries up to try to rob Peter to pay Paul, except it has a license to kill).
So, basically, if you're a desktop user whose computer is just a way to play Frozen Bubble and to back up your pictures from your phone, you can probably get away easily with little swap.
Throw in light usage things like Firefox and Open, er, LibreOffice and you'll occasionally hit OOM situations where the kernel will kill things you aren't needing right then. You'll probably never notice.
Up the usage, add in more stuff running at once, and your machine will "feel sluggish"; it'll be hard to pin down why unless you know where to look in the kernel's counters.
Use the system for "servery" stuff, such as a PW private server, and you can quite plausibly see maps and stuff crashing for no discernible reason. As I stated before, it all happens so fast that it's "just a crash" as far as the admin's concerned. Especially if it happens to be collateral damage from something else happening on the system. Remember that shared memory (shmfs) and /tmp (tmpfs) also use your memory/swap, so system cron jobs can foreseeably cause your server to be crashy with insufficient swap.
Seriously, though, if you're using secondhand machines for cheap (you're not the only one, 343, who looks for good deals!), you probably have a spare drive bay or two. Pick up a cheap 15K 36GB disk, use that for your 1:1 to 1.5:1 swap partition and then the rest for user home directories, "stash space", whatever. It just seems senseless to me to jeopardize stability to save a couple gigs of disk space.
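As a sketch of the arithmetic for that spare drive (the device name /dev/sdb1 is just a placeholder for whatever partition you carve out; the mkswap/swapon lines are printed rather than run, since they need root and the right device):

```shell
#!/bin/sh
# Compute the 1:1 and 1.5:1 swap targets from installed RAM, then
# print the commands you would run (as root) to enable the partition.
ram_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
lo_mb=$((ram_kb / 1024))
hi_mb=$((ram_kb * 3 / 2 / 1024))
echo "Installed RAM: ${lo_mb} MB -> swap target: ${lo_mb}-${hi_mb} MB"
echo "# mkswap /dev/sdb1"
echo "# swapon /dev/sdb1"
echo "# echo '/dev/sdb1 none swap sw 0 0' >> /etc/fstab"
```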
Again, there are no guarantees that any particular installation will ever see a problem. But it certainly can't be ruled out in advance. That's just the way luck works when you're dealing with passing out IOUs.
Also, this isn't directed at you, personally, 343. It's more directed at everyone who's interested in how to make the most of their lower-end machines. If we all had servers with 32-64 GB RAM, we wouldn't be in this thread. ;)
Well then, I don't even see where any debate or argument would be... I am all for 1:1 PhysRAM:Swap. I just can't see recommending any more than that, because then you'll find people thinking it's OK to try to run a PW server (all maps open) on a machine that only physically has 1GB of RAM: "oh, well, it'll be okay as long as I create a 32GB swap" ~~~ NO, it wouldn't :rofl: Then again, yes, I really do know that swap is 'necessary'; just from what I know / my experience, I wouldn't go above 1:1. Also, with higher and higher amounts, I don't 'see' a reason to even keep it 1:1. Could I be wrong? Sure, I never claimed to 'know it all'. I'm just saying, personally, if I ever had a machine with 2TB of RAM, I sure as hell wouldn't worry about giving it a matching (1:1) 2TB swap; rather, I'd use 500GB (1:0.25) =O Enter, my 'chart':
Spoiler:
Now if I could only find the true original source of that information =\
&& to be honest, I usually let the system set it / use 'default' values...
Yeah, like I've said, 1:1-1.5:1 is where people should be. Almost every distro defaults to about 1:1.
I'm willing to bet that by the time you have 2TB of RAM in a box in front of you to tinker with, you won't care about the 2TB of disk space. I remember back when 500MB disks were a pipe dream, and now we're stuck for a bit with disks about 6 times that size until EFI takes off. Storage prices drop dramatically over time.
And if you could find the source to that chart, I'd be curious to see it. I've rarely seen anyone who's actually delved into the guts of Linux memory management advocate less than 1:1 (unless it's "off for specific reasons").
Perhaps one of these days, I'll have to sit down and write a "Swap, PW, and You" guide that shows how to get optimal performance out of swap with PW. (There are a few nifty tricks that seem obvious once you've done them!)
So, discussion is at rest, I guess. Seems 343 and I agree upon "use 1:1 swap"...
LIKE and AGREED
Use 1:1
Spoiler:
Yea, IDK if I will ever have a 2TB(-RAM) machine in my lifetime -- although I've been eyeing this one: HP ProLiant DL900 Servers - comparison results Small & Medium Business - HP which does support a max of 2TB of DDR3
Mmm. I got the first ProLiant I've used in some years a couple months back. I gotta say that HP/Compaq recovered well from their server missteps when they merged. (Though it sure did take them long enough.) Sexy little machines, they are.
By the way, for those of you looking for el-cheapo secondhand servers, you probably shouldn't be looking at things older than about 2005 or so. Prior to that, you'll find their RAM is a lot harder to get. Even better is 2007 or newer. And aim for a "name brand" server such as HP, IBM, or Sun (but make sure it's got a Xeon or Opteron with any of those, since IBM and Sun make their own CPUs too!), since "custom built" servers will be harder to keep on proper firmware levels and are less likely to be 100% supported under Linux. Also, be completely sure you can fit at least 16GB of RAM into the server.
(Trying to get the thread back on track, after I derailed it...)
I know, I am answering RBB's question... http://forum.ragezone.com/6702824-post13.html
add:
&& personally I didn't think it would, but I am testing that theory anyway...
Oh well it was a nice idea.
Went around and experimented with this today. In short: it does not work as you say; it's just a way to bulk-load instances.
What it actually does is: gs starts, forks itself, and then loads the specified maps. It might be slightly faster to start up, but it does not save any RAM (as shown below). It also means that, once started, the maps are fully independent (crashing one doesn't crash the others).
Load at once (./gs 1 2)
(notice the parent process becomes 1 as the initial loader dies)
Code:
$ ps -C gs -o rss,pid,ppid,command
   RSS   PID  PPID COMMAND
 80668 18112 17706 ./gs gs01 gsworld.cfg gs.cfg gsworldalias.cfg is20

(a few seconds later)

$ ps -C gs -o rss,pid,ppid,command
   RSS   PID  PPID COMMAND
420456 18118     1 ./gs gs01 gsworld.cfg gs.cfg gsworldalias.cfg
134024 18119     1 ./gs is20 gsworld.cfg gs.cfg gsworldalias.cfg
Load separately (./gs 1 ; ./gs 2)
Code:
$ ps -C gs -o rss,pid,ppid,command
   RSS   PID  PPID COMMAND
419188 18023 17706 ./gs gs01 gsworld.cfg gs.cfg gsworldalias.cfg
133016 18052 17706 ./gs is20 gsworld.cfg gs.cfg gsworldalias.cfg
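To compare the two launch methods fairly, it's handy to sum the RSS column rather than eyeball it. One caveat worth knowing: RSS counts shared pages once per process, so for forked processes the Pss field in /proc/&lt;pid&gt;/smaps gives a more honest combined total. A quick sketch:

```shell
#!/bin/sh
# Total resident memory of all gs processes, in kB; run once after
# each launch method and compare. "ps -o rss=" prints bare numbers
# with no header, so awk can just sum the column.
ps -C gs -o rss= | awk '{ total += $1 } END { print total + 0, "kB total RSS" }'
```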
I will have to test that in full detail later on... All I know is that my previous tests (on my 32GB server) concluded that my release needs at least 16GB of RAM to run all maps simultaneously and safely. Also, I have never been able to open all maps at the same time on a machine with only 4GB using the 'original'/"correct" method of loading one map at a time. However, I HAVE been able to open all maps simultaneously using the above method on that same 4GB machine; I kept that server up for 7 days with no crashes before concluding that it does indeed use less RAM to start all the maps that way...