i followed the example for readline history blank-line exclusion as shown in the docs here ==> ruby-doc.org/stdlib-1.9.3/libdoc/readline/rdoc/Readline.html but the code from the example breaks history altogether, here's my code ==> http://termbin.com/3xxt
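(i can't see the termbin paste, but the stdlib docs pattern is roughly this: pass `true` as `Readline.readline`'s second argument so each line is auto-appended to `HISTORY`, then pop blank entries back off. the prompt and loop body here are placeholders, not your actual code:)

```ruby
require 'readline'

# minimal sketch of the blank-line-exclusion pattern from the docs:
# the second arg `true` auto-adds each line to HISTORY, and we pop
# the entry back off whenever it turns out to be blank.
def prompt_loop
  while (buf = Readline.readline('> ', true))
    Readline::HISTORY.pop if buf.strip.empty?  # drop blank entries
    break if buf == 'exit'
    puts buf
  end
end

# the HISTORY bookkeeping can be exercised without a tty:
Readline::HISTORY.push('ls -la', '')
Readline::HISTORY.pop if Readline::HISTORY[-1].strip.empty?
```

if history breaks entirely, a common culprit is popping unconditionally (or popping when HISTORY is empty) instead of only when the just-added line is blank.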
dag. my script gets killed for using too much ram. is it the ruby way to read n lines from a file, parse them, and write them to a temp file so i can read in n more lines without dying of ram?
ruby obediently uses up all the memory. i think i've encountered a design error on my part. is it rubyish to use a temp file?
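(you usually don't need to hand-roll the n-lines-at-a-time spill: `File.foreach` streams one line at a time, so memory stays flat no matter how big the dump is. a sketch with a made-up lastb-style sample; the field layout and parse step are guesses to adjust for your real format:)

```ruby
require 'tempfile'

# fake a tiny "lastb"-style dump so the sketch is self-contained;
# in practice you'd point File.foreach at your real file.
input = Tempfile.new('lastb')
input.puts 'root  ssh:notty  203.0.113.9   Mon Jan  1 00:00'
input.puts 'admin ssh:notty  198.51.100.4  Mon Jan  1 00:01'
input.flush

# File.foreach reads one line per iteration -- no slurping, no manual
# temp-file shuffling just to keep memory bounded.
counts = Hash.new(0)
out = Tempfile.new('parsed')
File.foreach(input.path) do |line|
  user = line.split.first        # hypothetical parse step
  counts[user] += 1
  out.puts line                  # spill anything you want to keep
end
out.flush
counts  # => {"root"=>1, "admin"=>1}
```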
biox: yes, exactly. parsing log files, putting them into hashes, correlating their parts
biox: thank you so much, i'm sure it will. things grow quadratically because of a nested loop
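(the quadratic blow-up usually comes from scanning one list once per element of another; grouping one side into a hash first turns each correlation into an O(1) lookup. field names here are hypothetical:)

```ruby
# hypothetical parsed lastb entries
attempts = [
  { user: 'root',  ip: '203.0.113.9' },
  { user: 'admin', ip: '198.51.100.4' },
  { user: 'root',  ip: '203.0.113.9' },
]

# nested-loop version would be O(n^2):
#   attempts.each { |a| attempts.each { |b| correlate(a, b) } }

# hash version: group once, then every correlation is a single lookup
by_ip = attempts.group_by { |a| a[:ip] }
by_ip['203.0.113.9'].size  # => 2
```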
i'm dumping the output of the lastb command into a file and manipulating it
biox: i've done this in bash so many times but it always feels so clunky. i suppose i should just refactor my bash into better functions i can reuse, but the same code written in ruby is so easy to read
i do want to sort it, but i can just load the discard file back into memory and then sort, because it wasn't the file itself exhausting memory but the manipulation while it was in memory. definitely looking up 'lazy enumerator'
right on. just looked it up and that may solve my problem, thanks a million :)
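(for the record, what `lazy` buys you: chained select/map over a stream evaluates one element at a time instead of building an intermediate array per step. a toy sketch using a range as a stand-in for `File.foreach(path).lazy`:)

```ruby
# without .lazy, each select/map would materialize a full million-element
# array; with it, elements flow through one at a time and `first(3)`
# stops the whole pipeline after three results.
failed = (1..1_000_000).lazy
                       .select { |n| n.even? }
                       .map    { |n| n * 10 }
                       .first(3)
failed  # => [20, 40, 60]
```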