

[11:26:46] bawNg: Read error: Connection reset by peer


[15:43:02] bawNg: mikhael_k33hl: you can't trap KILL, it forces a process to terminate immediately without time for any interrupts to be processed


[03:57:54] bawNg: that's much better than the only options for packaging ruby apps for distribution a few years ago
[03:58:08] bawNg: I haven't needed to package anything for distribution recently
[03:59:19] bawNg: that's probably mostly rubygems fault
[03:59:40] bawNg: I think rubygems accounts for the vast majority of start up time, the actual ruby boot up is something like 50ms
[04:09:38] bawNg: seems rubygems is a lot faster than it used to be, but you're also not requiring any gems in that benchmark
[04:55:16] bawNg: JJonah: you could write a class method that generates that based on the attributes given
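One way to read that suggestion (the original code isn't in the log, so the names here are hypothetical): a class method that takes a list of attributes and generates the readers and initializer with `define_method`.

```ruby
# Hypothetical sketch: Model.attributes generates readers and an
# initializer from the attribute names it is given.
class Model
  def self.attributes(*names)
    attr_reader(*names)
    define_method(:initialize) do |**values|
      names.each { |n| instance_variable_set(:"@#{n}", values[n]) }
    end
  end
end

class User < Model
  attributes :name, :email
end

u = User.new(name: "ada", email: "ada@example.com")
```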
[10:00:30] bawNg: nginx is great, unicorn is good for a blocking ruby web server, thin is great for an eventmachine based one
[10:01:54] bawNg: nginx is a non-blocking reactor based server, it can handle hundreds of thousands of requests in a single thread, which is way more efficient than apache's threaded design
[10:02:04] bawNg: there is no reason to use apache for anything
[10:02:50] bawNg: you can use nginx for anything, it's a reverse proxy server, you put it in front of all your services and have it serve static files and proxy dynamic requests to your services
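The layout being described looks roughly like this in nginx terms (paths and the backend port are placeholders, not from the log):

```nginx
# Minimal sketch: nginx serves static files itself and reverse-proxies
# dynamic requests to an app server (e.g. unicorn or thin) behind it.
server {
    listen 80;

    location /assets/ {
        root /var/www/myapp/public;       # static files served directly
    }

    location / {
        proxy_pass http://127.0.0.1:8080; # placeholder backend port
        proxy_set_header Host $host;
    }
}
```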
[10:03:15] bawNg: I'd recommend anyone who uses apache looks at migrating to nginx
[10:03:21] bawNg: apache is horrible for so many reasons
[10:04:21] bawNg: Unicorn is a great blocking ruby web server, designed very efficiently using a forking process model instead of threads, just like nginx
[10:04:44] bawNg: postgres is a great SQL database for the same reason, it is also based on forking processes
[11:02:50] bawNg: for any IO-bound application, async event based is the most efficient and ideal implementation
[11:03:11] bawNg: the vast majority of ruby applications that I've always built are eventmachine based
[11:04:13] bawNg: darix: why would you prefer puma over unicorn?
[11:05:14] bawNg: nodejs is a bit faster than ruby due to the very optimized V8 runtime which has a very good JIT, but most applications don't need to scale to the point that the difference really matters
[11:06:12] bawNg: the advantages of being able to use ruby outweigh the performance benefit of using JS for server-side applications, at least for me
[11:06:47] bawNg: darix: well it's based on threads, not forked processes, so it will use less memory if CoW is not great
[11:07:14] bawNg: new ruby versions should have better CoW support, but it also depends on the implementation of your application
[11:07:44] bawNg: Unicorn's forked processes will give considerably better performance than puma's threads
[11:08:21] bawNg: threading in ruby is not very useful since the GVL means that only one thread can call into the ruby API or extensions at a time
[11:08:40] bawNg: so there are very few cases where multiple threads can even execute at the same time
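A rough illustration of the GVL claim: two CPU-bound threads in MRI finish in about the same wall time as running the work sequentially, because only one thread can execute Ruby code at a time.

```ruby
require 'benchmark'

# Pure-Ruby CPU work; no IO, so the GVL is never released for long.
work = -> { 100_000.times { Math.sqrt(12_345) } }

sequential = Benchmark.realtime { 2.times { work.call } }
threaded   = Benchmark.realtime do
  2.times.map { Thread.new(&work) }.each(&:join)
end
# On MRI, `threaded` is typically close to `sequential`, not half of it.
```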
[11:10:26] bawNg: I use unicorn to serve haml and slim pages very fast, nginx to serve static files, and then do long polling or websockets to an eventmachine based backend for everything after the page is loaded
[11:11:59] bawNg: I use sinatra for web requests and roll almost everything else from scratch myself
[11:12:36] bawNg: if you want a nodejs alternative, look at eventmachine
[11:12:45] bawNg: for any IO-bound application, it is ideal
[11:14:27] bawNg: personally not a fan, but it would work too
[11:14:52] bawNg: you'll probably want to do some research and see what libraries that you need are supported by eventmachine and celluloid, and decide based on that
[11:15:26] bawNg: I end up writing my own eventmachine based libraries from scratch most of the time, or I monkeypatch blocking gems to be EM-based
[11:16:01] bawNg: there are a lot of EM libraries available, but most of the time they don't suit my needs for various reasons
[22:37:07] bawNg: tejasvi5: just like that
[22:39:01] bawNg: yes, it's up to each person to setup their IRC client to highlight their name
[22:44:39] bawNg: wait, how does my actuator gem have 492 downloads already after less than 3 days on rubygems? I'd expect very few people to even have a use case for it, and out of those, none would even know that it exists yet
[22:46:14] bawNg: that's a lot of bots


[04:18:53] bawNg: nchambers: yeah it is


[14:55:35] bawNg: well and rake-compiler
[14:55:56] bawNg: well that's a given, haha
[14:57:18] bawNg: maybe one day I'll be able to find time to wrap libuv and add IO support to this gem, then I can use it instead of eventmachine for many applications, but I doubt I'll get to that anytime soon
[14:57:47] bawNg: oh damn, I forgot to revert my test change
[14:57:57] bawNg: check the top of setup_test
[14:58:12] bawNg: I'll push a fix for that just now
[15:00:13] bawNg: I published a fix for the tests
[15:00:51] bawNg: I'm not too worried about warnings in the C/C++ compilation, they should all be harmless
[15:01:05] bawNg: I'll try reduce the amount of warning spam when I rewrite more ruby into native
[15:03:16] bawNg: that sounds very broken
[15:03:19] bawNg: are you on OSX?
[15:03:38] bawNg: damn, that sucks
[15:03:52] bawNg: I will have to try figure out why that is broken, without being able to reproduce and test
[15:04:29] bawNg: I use as little C++ as possible :P
[15:04:40] bawNg: this is the most C++ I've used in years actually
[15:04:53] bawNg: I can't wait for Jai to be released, so that I can use it instead of C and C++
[15:06:17] bawNg: apeiros: what you can try do for me, is enable debug log level and set a debug log file so that it goes to the file instead of spamming stdout
[15:06:39] bawNg: the debug log level will spam a lot, but hopefully it'll include something useful, otherwise I'll have to add even more
[15:07:08] bawNg: Log.file_path = "actuator.log"
[15:07:16] bawNg: Log.level = :debug
[15:07:33] bawNg: I didn't test any of that, since I added it just for the gem, hopefully it works :P
[15:08:38] bawNg: yeah you can just put it at the top after the gem is required
[15:09:19] bawNg: Jai is a systems programming language that aims to be a C/C++ replacement that is ideal for game devs, it allows you to write code that is even more efficient than C++, it has no headers, it compiles a full game and engine in 500ms instead of an hour like C++
[15:09:47] bawNg: it has insanely powerful compile-time metaprogramming and polymorphic support that puts all other languages to shame
[15:10:01] bawNg: it will hopefully be released by this time next year
[15:10:37] bawNg: well it's done, I've been tracking the development of it for a while now
[15:10:45] bawNg: the game engine will be open sourced with the language
[15:11:09] bawNg: almost all developers don't write efficient code these days
[15:11:19] bawNg: compiler devs are apparently no different
[15:11:32] bawNg: but even C++ vtables are slow due to indirection and cache invalidation
[15:11:36] bawNg: Jai doesn't have that problem
[15:12:31] bawNg: Rust forces everyone to learn new ways of writing code that ensure you can't make a mistake, all the safety introduces overhead and the learning curve of learning all the different ways of doing things requires time
[15:12:53] bawNg: Jai gives you the power to do things however you like, you can be unsafe and break things, there are no training wheels
[15:13:15] bawNg: but that means that you can write insanely efficient code, which works very well with the CPU caches
[15:13:41] bawNg: Jai will even support transparent SoA out of the box
[15:15:27] bawNg: I'm really excited about a lot of the problems that Jai solves, many of which most programmers and languages don't even consider to be problems
[15:16:16] bawNg: these days writing inefficient software that wastes CPU cycles is just considered normal, hardware is big enough to handle it in many cases, but you could do so much more in so much less time if you actually did things efficiently
[15:17:40] bawNg: some do, but many things introduce overhead, and all that overhead adds up
[15:18:59] bawNg: dminuoso: do you know who Jonathan Blow is?
[15:19:27] bawNg: he's created almost all of Jai, the engine and the game written in it
[15:19:51] bawNg: he created Braid and The Witness
[15:20:06] bawNg: I've never played The Witness, but Braid was a serious work of art
[15:21:28] bawNg: mruby needs some serious optimization before it will be feasible to use it in any large scale game
[15:21:44] bawNg: hopefully it gets a JIT after MRI
[15:22:42] bawNg: I would love to be able to use mruby for game server scripting, I've been using C, C++, C# and SourcePawn for the last 10 years
[15:23:03] bawNg: I've been hoping to be able to use mruby since Matz just started working on it, but it's still not fast enough
[15:23:48] bawNg: definitely not python, it's even slower than ruby, and any language which isn't native to the engine is pointless since the overhead of a bridge and serialization is way too much
[15:24:20] bawNg: I used LuaJIT for one engine a few years ago, not the best language but reasonably fast at least
[15:27:53] bawNg: #send could have been named better, it gets replaced by IO related methods in a lot of libraries
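The `#send` naming problem in practice: IO-style classes routinely define their own `send`, shadowing `Object#send`, which is exactly why `Object#__send__` exists as an alias.

```ruby
# A made-up socket-like class that shadows Object#send, as many IO
# libraries do.
class FakeSocket
  def send(data)
    "wrote #{data.bytesize} bytes"
  end

  def secret
    42
  end
end

sock = FakeSocket.new
a = sock.send("hello")       # calls the IO-style method
b = sock.__send__(:secret)   # still reaches dynamic dispatch
```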
[15:30:31] bawNg: how would that be different from #instance_variables?
[15:30:57] bawNg: apeiros: oh I missed what you said about #file_path earlier, are you sure it rejects a string?
[15:31:45] bawNg: I copied the string check from the ruby core code base, since the STRING_P macro is defined in a different header
[15:32:03] bawNg: (RB_TYPE_P(path, T_STRING) && CLASS_OF(path) == rb_cString) should allow a string
[15:32:55] bawNg: ah, the symbol check macro must raise an exception if false
[15:33:03] bawNg: that's stupid, I'll figure out what macro just returns false
[15:34:30] bawNg: no, I mean to check if a value is a symbol
[15:35:03] bawNg: looking at the macro in the core, I don't see why SYMBOL_P would raise an exception
[15:35:16] bawNg: that argument type error definitely isn't coming from my code
[15:35:30] bawNg: unless SYMBOL_P returns true on a string too
[15:39:38] bawNg: oh I just made a stupid mistake with the implementation
[15:42:55] bawNg: apeiros: I've published and pushed a new version
[15:44:13] bawNg: you can just kill it when it starts spamming warnings
[15:44:18] bawNg: since that means something is very wrong
[15:46:17] bawNg: plsql does not sound very fun
[15:50:16] bawNg: I guess getting the clock time on OSX is failing completely then
[15:50:29] bawNg: I wish I had a mac to test with
[15:50:59] bawNg: do you know if __APPLE__ is defined when you compile an extension?
[15:52:05] bawNg: looks like the ruby core source is full of __APPLE__ #ifdefs, so it probably should be defined
[15:54:13] bawNg: I don't see any #ifdefs that include DARWIN in the ruby core source, and the only OSX one is MACOSX_DYLD, which seems like something specific that isn't relevant
[15:54:51] bawNg: but it seems most likely that you're somehow compiling without __APPLE__ being defined, since that would explain 0 being returned for the clock
[16:01:52] bawNg: everything points to __APPLE__ being the correct define, I guess I could detect OSX in the extconf and add my own define, but that doesn't seem like the best solution
[16:05:35] bawNg: apeiros: what compiler are you using? apparently __APPLE__ will likely only work on gcc, clang and intel compilers
[16:05:53] bawNg: so your ruby must have been compiled with one of those, but maybe you are compiling extensions with a different compiler?
[16:09:42] bawNg: hmm, strange
[16:30:18] bawNg: apeiros: it's something so small that is needed, will be worth getting to the bottom of it
[16:30:39] bawNg: I'm 99% sure that the code in the extension will just work on OSX, if that define was there
[16:31:03] bawNg: so I'll probably just have to detect OSX and add my own define from extconf
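The extconf fallback being described would look something like this (the define name `FORCE_MACOS` is made up for illustration; the actual gem's extconf isn't in the log):

```ruby
# extconf.rb sketch: if the platform looks like macOS, add our own
# define so the C code can test it even when __APPLE__ is missing.
require 'mkmf'

$defs << "-DFORCE_MACOS" if RUBY_PLATFORM.include?("darwin")

# ...followed by the usual create_makefile("actuator/actuator")
```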
[16:31:14] bawNg: it's just shit to not be able to test it myself
[16:31:53] bawNg: but ideally I'd really like to know why __APPLE__ isn't defined on your system, with your compiler
[16:32:01] bawNg: since everything seems to say that it should be
[16:38:30] bawNg: apeiros: I added support for the __MACH__ define and made compilation fail if no clock support is found, you can try build it again whenever you get a chance
[16:38:45] bawNg: if that doesn't work, I'll have to just try hack the detection into extconf
[16:45:12] bawNg: apeiros: nevermind, I was just being dumb, there was a missing line for OSX that I didn't notice, should hopefully be working now
[16:50:18] bawNg: oh, you should have probably disabled the log, that will slow it down
[16:52:10] bawNg: that's either because your machine is loaded, or the log is slowing it down too much
[16:52:18] bawNg: it'll retry the precision test a bunch until it passes
[16:53:29] bawNg: well, if there's no CPU time available exactly when a timer is due, the scheduler thread will wake up late, so a moderate load
[16:53:41] bawNg: but that's probably just the log file slowing it down
[16:54:29] bawNg: I made the precision test much more relaxed than it should really be, so that it passes on really low end hardware
[16:54:42] bawNg: it passes on a raspberry pi v1, which is really slow
[16:55:03] bawNg: yeah it has a very small footprint, the test should use almost no measurable CPU
[16:55:14] bawNg: most of the CPU used is the ruby code in the test itself
[16:56:06] bawNg: let me know what the precision results are like on OSX with plenty of free CPU time
[16:56:21] bawNg: will be interesting to see if it's better or worse than the clock on Windows
[16:57:12] bawNg: the median?
[16:57:38] bawNg: that seems really high, so either the clock is inaccurate, or the thread sleeping is
[16:59:27] bawNg: this is the result on my windows 7 desktop which is reasonably loaded - Total: 5000, Over 1ms: 1, Low: 0.4 us, High: 1172.4 us, Median: 2.2 us, Variance: 20.4 us
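A rough Ruby sketch of the kind of precision probe being discussed (the actual gem's test is native; this just measures how far past a 1 ms sleep deadline each wakeup lands):

```ruby
# Measure sleep overshoot in microseconds over a number of samples.
def overshoot_us(samples: 200)
  samples.times.map do
    t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    sleep 0.001
    ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0) - 0.001) * 1_000_000
  end
end

results = overshoot_us
stats = {
  low:    results.min.round(1),
  high:   results.max.round(1),
  median: results.sort[results.size / 2].round(1),
}
```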
[16:59:40] bawNg: my machine is using > 50% CPU constantly
[17:00:01] bawNg: which accounts for the single sample over 1ms
[17:01:32] bawNg: thanks for the results, definitely should be better, I'll have to put some thought into it and come up with some more tests for you to run
[17:16:45] bawNg: yeah these days people use libraries for the most simple things that can be implemented with a single line of code
[17:17:57] bawNg: once you get past the names of things, working with ruby from C isn't really that much different from ruby itself
[17:18:24] bawNg: you work with ruby objects pretty much the same way, everything's just a VALUE in C, just like everything's an object in ruby
[17:18:58] bawNg: yeah, handling GC properly is one thing that is easy to mess up
[17:19:18] bawNg: and can be rough to track down, but once you get a handle on it, it's not all that bad
[17:23:20] bawNg: yup, that's when GC usually kicks in, at some completely random time haha
[17:25:11] bawNg: absolute hell to track down
[17:41:29] bawNg: dminuoso: if you ever have a segfault problem, you can always just install neversaydie, haha
[17:42:10] bawNg: I don't think we will ever know for sure
[17:42:30] bawNg: I think it's hilarious
[17:44:08] bawNg: might as well reboot when there is a warning too
[17:44:20] bawNg: ruby warnings are serious business
[17:47:18] bawNg: I've seen some really dodgy things built with Pry
[17:49:30] bawNg: I've watched quite a few RubyConf videos over the years, never saw that though
[17:51:34] bawNg: ah, an AU RubyConf
[17:54:25] bawNg: apeiros: so I did a bit of research, and it seems like it's possible that the clock returned by that system call on your version of OSX could not be so high precision, but most likely it is the thread sleeping that is causing the precision loss, and it sounds like the only way to solve that is to set the threads scheduling priority to real-time on OSX
[17:54:47] bawNg: but there is conflicting information and benchmark results online, so it's hard to say for sure
[17:56:06] bawNg: one set of results shows that OSX is more precise with thread sleeps than both linux and windows, and doesn't mention anything about thread priority, but the official apple docs say that high priority is needed for good precision, and that more than 500 us without load is considered a serious error with high priority
[17:56:27] bawNg: so that makes me wonder what the expected precision even is with high priority
[17:57:08] bawNg: it seems like solving this will require some testing, trial and error, so I'm probably going to have to just leave OSX with worse precision for now
[17:57:25] bawNg: hopefully someone with OSX can look into it and make a patch for it
[17:57:55] bawNg: otherwise I'll have to put together some tests for you or someone else with OSX to run at some point
[17:58:05] bawNg: remote debugging is never fun
[17:58:57] bawNg: maybe dminuoso can have a look at it at some point
[18:00:23] bawNg: well you have access to OSX, and know C, so you're in a better position than anyone else I know
[18:00:33] bawNg: it shouldn't be hard to solve this, it'll just require some testing
[18:01:08] bawNg: need to figure out if the system call which returns the clock is accurate, and test making the thread real-time
[18:02:58] bawNg: dminuoso: there's basically just 3 things that could be the issue, so I just need test results to figure out which is to blame
[18:04:05] bawNg: either the clock being returned by mach_absolute_time() is inaccurate, which is unlikely, or else the thread is not waking up soon enough, and that is either because the thread's priority is not set to real-time, or because the ruby API is not calling the same thing as mach_wait_until() which is documented at the link above
[18:27:34] bawNg: require_relative 'some_file'
[18:27:56] bawNg: that is relative to the file requiring, instead of the working directory
[18:28:44] bawNg: autoload is commonly used by some gems, it seems easy enough to use, I don't personally ever use it though
[18:29:12] bawNg: I create my own custom loading system for cases where I want dynamic module loading
[18:30:19] bawNg: autoload is a stdlib yeah
[18:32:27] bawNg: oh right, it's literally on Module
[18:32:31] bawNg: couldn't be easier to use
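The usage being described: `autoload` registers a constant-to-file mapping on Module, and the file is only required the first time the constant is referenced (the path below is hypothetical).

```ruby
# Nothing is loaded at registration time; the require is deferred until
# MyApp::Parser is first referenced.
module MyApp
  autoload :Parser, "my_app/parser"
end

pending = MyApp.autoload?(:Parser)  # returns the path while unloaded
```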
[18:34:21] bawNg: yeah, I don't like autoload very much for a bunch of reasons
[18:35:01] bawNg: sometimes implicit module loading in long running modular applications makes sense, but then I end up having to build a full dependency management system along with plugin loading, since with implicit loaded modules, dependency modules and super classes need to be loaded first
[18:35:34] bawNg: I've done exactly that in a few languages over the years, probably at least 4 different times in ruby alone
[18:36:21] bawNg: EventMachine is your best option for async IO
[18:36:56] bawNg: I'd love to add IO support to my reactor eventually, so that I can use it instead of EM, since EM is not implemented anywhere near as efficiently as it could be
[18:37:15] bawNg: but it's good enough for pretty much any normal use case
[18:38:03] bawNg: almost all the ruby applications I have built for the last 8 years are EM-based
[18:38:39] bawNg: I didn't like celluloid very much when I looked at it years ago, can't remember exactly why
[18:39:18] bawNg: but I have also built so much EM-related code up until this point, that changing to some other reactor doesn't make sense unless it's really worth it
[18:40:14] bawNg: I have monkeypatched more gems than I can count to be EM-based and/or fibered, and I've implemented even more protocols from scratch for EM applications
[18:41:37] bawNg: I personally use require 'bundler/setup'; Bundler.require(:default)
[18:41:41] bawNg: instead of bundle exec
[18:43:05] bawNg: I'd probably just wrap libuv if I ever find the time to add IO support to my reactor
[18:43:47] bawNg: I wrapped libuv in C# which I use for async IO embedded in game servers and other server-side applications
[18:43:57] bawNg: it's what nodejs is built on
[18:48:42] bawNg: the auto fibers are an interesting idea, maybe if that works out soon enough, I can just use that and not have to implement IO reactor support at all
[18:49:08] bawNg: I just want to be able to do IO and high precision timers in a single thread
[18:49:39] bawNg: I didn't even want to write my own reactor, but I profiled a bunch of ruby reactors like EM and a libuv wrapper, and they all have insanely bad timer precision
[18:50:36] bawNg: whatever the name ends up being, it's going to confuse a lot of people
[18:51:08] bawNg: most people already can't wrap their heads around a fiber, now they are going to have to know the difference between 3 different kinds of stack-based interactions
[18:52:02] bawNg: nchambers: it's never really been active, you can ask here for advice about EM
[18:52:55] bawNg: dminuoso: but I mean, fibers are even simpler, in that there is nothing automagically happening at all
[18:53:24] bawNg: but a lot of people can't even seem to wrap their heads around how a fiber/coroutine works, including people who use the things
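The "nothing automagical" point in code: a fiber is a manually scheduled coroutine, so nothing runs until you `resume` it, and `Fiber.yield` hands control straight back to the caller.

```ruby
# A fiber yielding Fibonacci numbers; each resume runs it until the
# next yield, then suspends it exactly there.
fib = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a
    a, b = b, a + b
  end
end

first_five = 5.times.map { fib.resume }  # => [0, 1, 1, 2, 3]
```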
[18:54:47] bawNg: 1 ns precision
[18:54:57] bawNg: windows is 250ns
[18:55:05] bawNg: OSX is who knows
[18:56:22] bawNg: yeah, games use nanosecond sleeps, which must introduce so much unnecessary overhead with all the context switches
[18:56:58] bawNg: dionysus69: did you run `bundle` first?
[18:59:06] bawNg: dminuoso: there are some parts of the kernel that should actually be implemented in user space, it's 2018
[19:00:11] bawNg: large companies like google use network drivers implemented in user space, which minimizes latency and saves all the copying into kernel buffers that the kernel would otherwise do, there are a bunch of other advantages too
[19:00:38] bawNg: game servers should be using user space networking, but we're still using kernel space stuff from the 70s
[19:01:21] bawNg: yeah, security is one of the other big things
[19:01:56] bawNg: the recent major exploits that are currently being patched by all the kernels can compromise networks in a huge way
[19:02:05] bawNg: that wouldn't be an issue if it was in user space
[19:03:20] bawNg: you can, but it depends on a bunch of factors
[19:03:52] bawNg: the performance loss caused by patching the exploits will affect all software pretty badly
[19:04:19] bawNg: in some cases, over 35% worse performance, just because of heavy use of vtables in C++, and various other things
[19:04:38] bawNg: we basically just lost many years of CPU hardware advances due to the patches
[19:06:02] bawNg: when software is compiled with the patched compilers, all caches will be invalidated constantly to prevent the exploit, which destroys performance in many cases
[19:06:24] bawNg: when all OS kernels have been patched, that introduces a bunch more overhead
[19:15:59] bawNg: eam: there are open source user space network stacks
[19:16:16] bawNg: large companies like google use user space stacks almost exclusively
[19:17:29] bawNg: I'd need to figure out where my source for that information came from
[19:19:45] bawNg: example of why kernel based network stack is bad:
[19:20:48] bawNg: also a good example of the CPU performance impact of patching the exploits
[19:21:38] bawNg: Fuchsia seems promising
[19:22:00] bawNg: nchambers: you can run any number of servers and clients from a single reactor
[19:22:26] bawNg: eam:
[19:23:49] bawNg: the overhead just keeps piling on
[19:26:21] bawNg: maybe we really need a mostly-but-not-totally userspace network stack
[19:26:47] bawNg: if the kernel could just control handling over connections to a userspace stack, that could take care of the separation
[19:27:13] bawNg: nchambers: it really depends on the project
[19:30:43] bawNg: eam: well at least you now know that userspace networking can be performant
[19:31:00] bawNg: I personally didn't know about that open source stack until a couple months ago either
[19:37:32] bawNg: you could, it may just require some effort
[19:38:00] bawNg: I'm fairly sure companies are using inhouse userspace stacks built on that exact open source toolkit
[19:38:14] bawNg: sure, it's not ideal for that use case
[19:39:02] bawNg: but large high performance latency-sensitive services are generally hosted on dedicated hardware, where it would make sense to have a dedicated network stack
[19:40:32] bawNg: if there is a huge amount of throughput, then the gain is in avoiding all that redundant copying for kernel buffers, and all the CPU time needed by the kernel to do that
[19:41:17] bawNg: rubycoder38: what kind of API?
[19:41:44] bawNg: and what kind of application are you building?
[19:44:59] bawNg: rubycoder38: then you could just use a simple loop with a blocking HTTP request and appropriate response/error handling, or you could use EventMachine to register a timer and make the request asynchronously
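The blocking-loop option could be sketched like this; the `fetch` callable here is a stand-in for a real `Net::HTTP` request so the structure (handle response, handle error, sleep, repeat) is visible on its own.

```ruby
# Generic poll loop: call fetch, record the response or the error,
# wait, and go again. interval/attempts are illustrative defaults.
def poll(fetch:, interval: 0, attempts: 3)
  results = []
  attempts.times do
    begin
      results << fetch.call
    rescue StandardError => e
      results << "error: #{e.message}"  # log and keep polling
    end
    sleep interval
  end
  results
end

calls = 0
fake_api = -> { (calls += 1).odd? ? "ok" : raise("timeout") }
log = poll(fetch: fake_api)  # => ["ok", "error: timeout", "ok"]
```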
[19:46:23] bawNg: dminuoso: most boxes don't have that amount of throughput, but sure kernels could support it better
[19:47:48] bawNg: rubycoder38: store their information in a Hash using some unique info as a key
[19:48:00] bawNg: like an ID or something else unique
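The Hash-keyed-by-ID suggestion, with made-up records for illustration:

```ruby
# Index records by a unique ID for constant-time lookup later.
users = {}
[{ id: 1, name: "ada" }, { id: 2, name: "linus" }].each do |user|
  users[user[:id]] = user
end

found = users[2][:name]  # => "linus"
```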
[19:49:41] bawNg: biggest impact by far is cloud services
[19:51:08] bawNg: kekolodeon: the MRI internals aren't nearly as bad as people make them out to be
[19:54:07] bawNg: kekolodeon: also make sure you look at the ruby source for the version you are using, if you're not using 2.5, since things change
[19:54:43] bawNg: yeah, pry is useful
[19:55:55] bawNg: I've added to the known issues for Actuator that timer precision on OSX is shit, hopefully someone who has OSX and knows a little C will be able to investigate further