Ove_

2019-08-22

[00:56:05] Ove_: *.net *.split
[01:14:03] Ove_: has joined #elixir-lang

2019-07-22

[14:53:25] Ove_: Read error: Connection reset by peer

2019-07-09

[08:29:03] Ove_: Is there a way to force IPv4 lookups only?
[08:29:18] Ove_: For an Elixir application.
[08:31:54] Ove_: Doesn't seem to work with Docker.
[08:32:07] Ove_: The host has IPv6 disabled.
[08:32:23] Ove_: But network interfaces still get created with IPv6 enabled.
[08:33:35] Ove_: Trying to spin up a container, and the Elixir application seems to try to do both IPv4 and IPv6 lookups together with musl.
[08:33:58] Ove_: In a way it is
[08:34:03] Ove_: But I do ops.
[08:34:14] Ove_: It's for $work.
[08:35:36] Ove_: What happens is that musl will try to resolve IPv6 and IPv4 at the same time, but doesn't return a record until IPv6 times out.
[08:35:54] Ove_: So I am looking for a way to disable IPv6 completely in the app.
[08:36:16] Ove_: And convincing service owners to change to a docker image with glibc is not a fight I am willing to take.
[08:36:29] Ove_: The thing is that disabling it in the os doesn't help.
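
A minimal Elixir sketch of one way to sidestep this: specifying the IPv4 address family at the socket level makes the VM request only A records, regardless of how the libc would behave on an unconstrained lookup. The hostname, port, and the HTTPoison/hackney option shown are illustrative assumptions, not taken from the discussion above.

```elixir
# Sketch: force IPv4-only name resolution / connects from the BEAM, bypassing
# musl's parallel A/AAAA lookups. "example.com" and port 80 are placeholders.

# Look up only A records (the :inet family means IPv4):
{:ok, addrs} = :inet.getaddrs(~c"example.com", :inet)
IO.inspect(addrs, label: "A records only")

# Connect with the :inet option so only IPv4 addresses are tried:
{:ok, _socket} =
  :gen_tcp.connect(~c"example.com", 80, [:inet, :binary, active: false])

# Many HTTP clients accept the same socket option, e.g. hackney/HTTPoison
# (assumption -- check the client's docs):
# HTTPoison.get!("http://example.com/", [], hackney: [connect_options: [:inet]])
```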

2019-06-28

[22:33:26] Ove_: Ping timeout: 252 seconds
[22:34:12] Ove_: has joined #elixir-lang

2019-04-13

[16:50:10] Ove_: *.net *.split

2019-03-20

[20:08:04] Ove_: Read error: Connection reset by peer
[20:12:50] Ove_: has joined #elixir-lang
[23:05:08] Ove_: Read error: Connection reset by peer
[23:11:22] Ove_: has joined #elixir-lang

2019-03-09

[02:04:30] Ove_: Read error: Connection reset by peer
[02:06:07] Ove_: has joined #elixir-lang

2019-03-08

[03:23:53] Ove_: Read error: Connection reset by peer
[17:57:45] Ove_: Read error: Connection reset by peer
[17:58:00] Ove_: has joined #elixir-lang

2019-02-20

[08:24:47] Ove_: With the Ruby GC, I know what amount of heap_live_slots and heap_free_slots I max out at. Adding them totals 2.3 million. Does it make sense to set RUBY_GC_HEAP_INIT_SLOTS to that value and then not allow the heap to grow any further?
[15:27:12] Ove_: Read error: Connection reset by peer
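
A hedged Ruby sketch of the tuning being discussed: reading the slot counts from GC.stat and pre-sizing the heap via environment variables. The numbers are the ones mentioned above; whether capping growth is advisable depends on the workload.

```ruby
# Sketch: inspect the live/free slot counts the question refers to.
# Run inside the app (e.g. a console) once it has reached steady state.
stat = GC.stat
live = stat[:heap_live_slots]
free = stat[:heap_free_slots]
puts "live=#{live} free=#{free} total=#{live + free}"

# Heap tuning is done via environment variables set before the process boots, e.g.:
#   RUBY_GC_HEAP_INIT_SLOTS=2300000     # pre-allocate roughly the observed total
#   RUBY_GC_HEAP_GROWTH_MAX_SLOTS=...   # caps how many slots a single growth step adds;
#                                       # MRI has no hard "never grow" switch (assumption: MRI 2.x)
```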

2019-02-08

[10:29:32] Ove_: has joined #ruby
[10:29:46] Ove_: Is there a specific channel for puma?
[10:30:14] Ove_: I am having trouble getting any output from pumactl (when specifying the pid).
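
One common reason pumactl prints nothing when given only a pid is that the introspection commands need puma's control app; with just a pid it can mostly send signals. A sketch, assuming puma 3.x/4.x; the socket path and token are placeholders.

```ruby
# config/puma.rb -- sketch, assuming puma 3.x/4.x; path and token are placeholders.
# Enables the control server that pumactl's stats/status commands talk to.
activate_control_app "unix:///tmp/puma_ctl.sock", auth_token: "s3cret"
```

With that in place, something like `pumactl -C unix:///tmp/puma_ctl.sock -T s3cret stats` should return JSON stats (check `pumactl --help` for the exact flags of the installed version).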

2017-05-27

[06:23:30] Ove_: has left #ruby: ()

2017-05-26

[20:15:31] Ove_: has joined #ruby
[20:24:41] Ove_: trying to log as different program_names and then match on the program name in syslog.
[20:24:52] Ove_: Seems Ruby still sends everything through the same program name.
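
This matches how Ruby's stdlib syslog binding works: the ident (program name) is set once per process when syslog is opened, so every logger in that process shares it unless syslog is explicitly reopened. A minimal sketch; the ident strings are made up.

```ruby
require "syslog/logger"

# The syslog ident ("program name") is process-wide: it is fixed when syslog
# is first opened, and later loggers in the same process reuse it. Two idents
# therefore generally mean two processes, or an explicit reopen.
logger = Syslog::Logger.new("my_worker")   # opens syslog with ident "my_worker"
logger.info("hello from my_worker")

# Reopening swaps the ident for the whole process (affects every logger):
Syslog.reopen("my_other_program", Syslog::LOG_PID, Syslog::LOG_USER)
Syslog.log(Syslog::LOG_INFO, "hello from my_other_program")
```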

2017-05-20

[19:35:43] Ove_: has left #ruby: ()

2017-05-18

[10:03:48] Ove_: has left #RubyOnRails: ()

2017-05-17

[11:03:02] Ove_: has joined #ruby
[11:03:33] Ove_: I am seeing really high ActiveRecord::QueryCache#call times (currently at 143 seconds).
[11:03:36] Ove_: Any way to fix it?
[11:08:11] Ove_: matthewd: All other queries run very fast, so I don't know what the issue is.
[11:14:17] Ove_: matthewd: Had it been a too-low connection pool, wouldn't it also imply that I'd get a "couldn't get a database connection bla bla"?
[11:17:33] Ove_: matthewd: I don't know, I am seeing this in newrelic, however in our own metrics I see an average of ~13 ms response times while on newrelic it says that I have a really high response time.
[11:19:08] Ove_: matthewd: Is it possible that newrelic reports the wrong metrics?
[11:20:22] Ove_: has joined #RubyOnRails
[11:26:13] Ove_: matthewd: Found the issue, or so I think.
[11:26:56] Ove_: They (the devs) are running multiple applications (shared code base, but running different tasks) under the same application in newrelic.
[11:30:57] Ove_: Yup, that was it.
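
The fix described boils down to giving each deployable its own app_name in the New Relic agent config, so their transaction timings (including ActiveRecord::QueryCache#call) stop being averaged together. A sketch of the relevant newrelic.yml fragment; the names are placeholders and the real file contains more settings.

```yaml
# newrelic.yml -- fragment only; app names are placeholders.
production:
  app_name: web-frontend        # the Rails web app
# and in the other deployable's newrelic.yml:
#   app_name: background-tasks
```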

2015-03-21

[13:13:57] Ove_: With puma and such and the database connection pool: is the pool size per worker, or shared across all of the workers/threads?
[13:29:19] Ove_: apeiros: Thank you
[13:35:04] Ove_: Not a dev, didn't know where to ask. Will ask in #rubyonrails.
[13:35:07] Ove_: Thank you!
[13:44:44] Ove_: Is there a way to find out how many of the database connections in the pool are actually used?
[13:45:10] Ove_: I think devs have been overly generous with the database pool size in our application.
[13:46:03] Ove_: They are using a size of 24 for 16 workers. That would be 16*24, right?
[13:50:03] Ove_: I mean, is that a sensible number of connections at all?
[13:50:25] Ove_: Not even a rule of thumb?
[13:50:43] Ove_: Monitor btw?
[13:52:17] Ove_: Not sure really.
[13:52:31] Ove_: We have about ~400-600 req/s spread across 4 servers.
[13:52:44] Ove_: But number of users?
[13:52:47] Ove_: In the millions.
[13:53:16] Ove_: Checking newrelic we get between ~15 and 30 ms per request.
[13:53:24] Ove_: Depending on the load.
[13:54:42] Ove_: The total of connections is like ~1.5k
[13:56:15] Ove_: And we seem to have around ~2.2k connections on the database server in idle state.
[13:56:34] Ove_: Should be able to make the instance size smaller imo.
[13:56:40] Ove_: Pool size*
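
The arithmetic behind the concern, as a sketch (assuming a standard Rails + puma setup): the ActiveRecord pool is per process, so every puma worker can open up to `pool` connections of its own.

```ruby
# config/puma.rb -- sketch; the numbers are the ones from the discussion above.
workers 16          # 16 forked worker processes
threads 5, 5        # each worker runs at most 5 threads

# Each worker process gets its own ActiveRecord pool (set via `pool:` in
# config/database.yml), so with pool: 24 the ceiling per app server is
#   16 workers * 24 = 384 connections, and roughly 1.5k across 4 servers.
# A pool only has to cover the threads inside one worker, so pool: 5 here
# would cap it at 16 * 5 = 80 connections per server.
```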
[13:57:38] Ove_: I should probably graph the number of idle versus non idle processes.
[13:57:42] Ove_: So I can back up my claims.
[13:59:12] Ove_: We have around ~12 active processes in postgres. :P
[14:00:48] Ove_: I need to get these into nice grafana stats.
[14:01:34] Ove_: I will experiment some with the pool size.
[14:03:24] Ove_: apeiros: Yeah, I'll have to measure these over the next few days.
[14:22:40] Ove_: apeiros: You should see my graph in grafana. It's pretty sweet. :D
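
For graphing idle versus active backends, a query along these lines against pg_stat_activity gives the counts to feed into grafana (assumes PostgreSQL 9.2+, where the state column exists):

```sql
-- Count PostgreSQL backends by state (active, idle, idle in transaction, ...).
SELECT state, count(*) AS connections
FROM pg_stat_activity
GROUP BY state
ORDER BY connections DESC;
```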