Displaying #cassandra-dev/2016-12-01.log:

Thu Dec 1 00:08:03 2016  Vijay:Joined the channel
Thu Dec 1 00:40:27 2016  dikang:Joined the channel
Thu Dec 1 01:27:12 2016  dikang:Joined the channel
Thu Dec 1 02:25:41 2016  kohlisankalp:Joined the channel
Thu Dec 1 02:27:07 2016  clohfink:Joined the channel
Thu Dec 1 02:37:46 2016  mstepura:Joined the channel
Thu Dec 1 02:49:47 2016  RussSpitzer:Joined the channel
Thu Dec 1 02:51:34 2016  mstepura:Joined the channel
Thu Dec 1 03:32:00 2016  mstepura:Joined the channel
Thu Dec 1 03:52:44 2016  mstepura:Joined the channel
Thu Dec 1 03:55:50 2016  mstepura:Joined the channel
Thu Dec 1 03:57:15 2016  mstepura:Joined the channel
Thu Dec 1 03:58:06 2016  mstepura:Joined the channel
Thu Dec 1 03:59:00 2016  mstepura:Joined the channel
Thu Dec 1 04:04:42 2016  mstepura:Joined the channel
Thu Dec 1 04:05:27 2016  mstepura:Joined the channel
Thu Dec 1 04:09:38 2016  mstepura:Joined the channel
Thu Dec 1 04:10:24 2016  mstepura:Joined the channel
Thu Dec 1 04:20:05 2016  mstepura:Joined the channel
Thu Dec 1 04:23:55 2016  dikang:Joined the channel
Thu Dec 1 04:29:15 2016  mstepura:Joined the channel
Thu Dec 1 04:30:10 2016  mstepura:Joined the channel
Thu Dec 1 04:32:36 2016  mstepura:Joined the channel
Thu Dec 1 04:33:21 2016  mstepura:Joined the channel
Thu Dec 1 04:34:05 2016  mstepura:Joined the channel
Thu Dec 1 04:35:38 2016  mstepura:Joined the channel
Thu Dec 1 04:36:36 2016  mstepura:Joined the channel
Thu Dec 1 04:37:21 2016  mstepura:Joined the channel
Thu Dec 1 04:38:11 2016  mstepura:Joined the channel
Thu Dec 1 04:40:48 2016  mstepura:Joined the channel
Thu Dec 1 04:45:52 2016  Vijay:Joined the channel
Thu Dec 1 05:12:48 2016  Vijay:Joined the channel
Thu Dec 1 05:22:20 2016  mstepura:Joined the channel
Thu Dec 1 05:44:37 2016  kohlisankalp:Joined the channel
Thu Dec 1 05:50:34 2016  mstepura:Joined the channel
Thu Dec 1 05:51:13 2016  mstepura:Joined the channel
Thu Dec 1 05:52:12 2016  mstepura:Joined the channel
Thu Dec 1 05:53:08 2016  mstepura:Joined the channel
Thu Dec 1 05:54:43 2016  mstepura:Joined the channel
Thu Dec 1 05:55:50 2016  mstepura:Joined the channel
Thu Dec 1 05:56:59 2016  mstepura:Joined the channel
Thu Dec 1 05:57:49 2016  mstepura:Joined the channel
Thu Dec 1 05:58:41 2016  mstepura:Joined the channel
Thu Dec 1 06:18:33 2016  mstepura:Joined the channel
Thu Dec 1 06:56:19 2016  mstepura:Joined the channel
Thu Dec 1 07:33:27 2016  marcuse:rustyrazorblade: yep, looks like a bug
Thu Dec 1 07:36:10 2016  kvaster:Joined the channel
Thu Dec 1 07:50:18 2016  gila:Joined the channel
Thu Dec 1 08:17:12 2016  spodkowinski:Joined the channel
Thu Dec 1 08:27:30 2016  driftx:Joined the channel
Thu Dec 1 08:39:01 2016  JoeyI_:Joined the channel
Thu Dec 1 08:52:09 2016  minimarcel_:Joined the channel
Thu Dec 1 10:00:23 2016  urandom_:Joined the channel
Thu Dec 1 10:05:28 2016  minimarcel_:Joined the channel
Thu Dec 1 10:43:33 2016  ifesdjeen:Joined the channel
Thu Dec 1 10:43:33 2016  gdusbabek:Joined the channel
Thu Dec 1 10:43:33 2016  charliek:Joined the channel
Thu Dec 1 12:09:16 2016  clohfink:Joined the channel
Thu Dec 1 12:22:21 2016  RussSpitzer:Joined the channel
Thu Dec 1 13:27:02 2016  kvaster:Joined the channel
Thu Dec 1 13:41:32 2016  mebigfatguy:Joined the channel
Thu Dec 1 13:51:00 2016  rustyrazorblade:Cool. I'll put up a patch today.
Thu Dec 1 13:51:08 2016  rustyrazorblade:Thanks marcuse
Thu Dec 1 13:53:17 2016  adamholmberg:Joined the channel
Thu Dec 1 13:56:21 2016  thobbs:Joined the channel
Thu Dec 1 14:10:19 2016  minimarcel_:Joined the channel
Thu Dec 1 14:24:46 2016  clohfink:Joined the channel
Thu Dec 1 14:36:57 2016  jasobrown:rustyrazorblade: you do realize you'll have to write that patch in java, not rustlang, correct? :P
Thu Dec 1 14:37:30 2016  rustyrazorblade:not if I rewrite compaction in rust
Thu Dec 1 14:37:55 2016  jasobrown:lmao - good luck with that
Thu Dec 1 14:38:01 2016  rustyrazorblade:never happening
Thu Dec 1 14:38:20 2016  rustyrazorblade:i was giving some consideration though about writing rust based memtables
Thu Dec 1 14:38:30 2016  jasobrown:just rewrite the entire database in rust, what could go wrong?
Thu Dec 1 14:38:45 2016  rustyrazorblade::)
Thu Dec 1 14:58:01 2016  adamholmberg:Joined the channel
Thu Dec 1 15:00:33 2016  kohlisankalp:Joined the channel
Thu Dec 1 15:24:03 2016  jfarrell:Joined the channel
Thu Dec 1 16:21:00 2016  gila:Joined the channel
Thu Dec 1 16:25:48 2016  gila:Joined the channel
Thu Dec 1 16:44:05 2016  gila:Joined the channel
Thu Dec 1 16:51:30 2016  thobbs:Joined the channel
Thu Dec 1 16:59:33 2016  kohlisankalp:Joined the channel
Thu Dec 1 17:15:20 2016  Vijay:Joined the channel
Thu Dec 1 17:18:14 2016  mstepura:Joined the channel
Thu Dec 1 17:31:55 2016  mstepura:Joined the channel
Thu Dec 1 17:32:00 2016  minimarcel_:Joined the channel
Thu Dec 1 18:02:04 2016  kohlisankalp:Joined the channel
Thu Dec 1 18:05:19 2016  dikang:Joined the channel
Thu Dec 1 18:09:39 2016  Vijay:Joined the channel
Thu Dec 1 18:14:45 2016  zaller:Joined the channel
Thu Dec 1 18:18:09 2016  mstepura:Joined the channel
Thu Dec 1 18:23:23 2016  aboudreault_:Joined the channel
Thu Dec 1 18:25:15 2016  zaller:Joined the channel
Thu Dec 1 18:36:06 2016  Vijay_:Joined the channel
Thu Dec 1 19:01:50 2016  jfarrell:Joined the channel
Thu Dec 1 19:18:04 2016  dikang:Joined the channel
Thu Dec 1 19:30:40 2016  TvdW:Joined the channel
Thu Dec 1 19:34:56 2016  urandom:so #9724 is quite a thing
Thu Dec 1 19:34:58 2016  CassBotJr:https://issues.apache.org/jira/browse/CASSANDRA-9724 (Resolved; Not A Problem; Unscheduled): "Aggregate appears to be causing query to be executed multiple times"
Thu Dec 1 19:35:07 2016  urandom:errr
Thu Dec 1 19:35:10 2016  urandom:#9754
Thu Dec 1 19:35:10 2016  CassBotJr:https://issues.apache.org/jira/browse/CASSANDRA-9754 (Awaiting Feedback; Unresolved; 4.x): "Make index info heap friendly for large CQL partitions"
Thu Dec 1 19:35:39 2016  urandom:i may eventually finish reading all the comments
Thu Dec 1 19:36:01 2016  urandom:after i take a quick break and re-read War and Peace
Thu Dec 1 19:38:37 2016  urandom:i wonder what this does, realistically, for the prospect of large(r) partitions
Thu Dec 1 19:53:52 2016  rustyrazorblade:it's quite a beast
Thu Dec 1 20:03:37 2016  gila:Joined the channel
Thu Dec 1 20:09:33 2016  mstepura:Joined the channel
Thu Dec 1 20:23:28 2016  jeffj:urandom: gets rid of indexes as you know it, turns it into b+ tree, all garbage on reading index goes away
Thu Dec 1 20:47:42 2016  urandom:jeffj: yeah, seems like it's definitely a huge win; it makes me wonder how much it moves the bar on reasonable partition sizes in practice
Thu Dec 1 20:49:09 2016  driftx:a handful of GB was always reasonable... just poor in index impl/heap usage
Thu Dec 1 20:49:35 2016  urandom:handful of GB? :)
Thu Dec 1 20:49:48 2016  urandom:everyone is always so cautious in throwing out numbers here :)
Thu Dec 1 20:50:20 2016  urandom:driftx: also, you are the first i've heard to say that, everyone else says "handful of MB"
Thu Dec 1 20:50:28 2016  jeffj:i think kjellman said he ran up to 150 or 250g partitions
Thu Dec 1 20:50:42 2016  jeffj:"sort of a big deal"
Thu Dec 1 20:50:51 2016  driftx:I bet, barring the index problem, jeffj would agree on handful of GB
Thu Dec 1 20:50:51 2016  dikang:Joined the channel
Thu Dec 1 20:50:51 2016  urandom:yeah, but what he was testing was sort of limited no?
Thu Dec 1 20:52:14 2016  jeffj:urandom: certainly was.
Thu Dec 1 20:52:18 2016  urandom:i really do need to finish trying to make it through that ticket
Thu Dec 1 20:52:40 2016  urandom:just as soon as i have it bound into a hard-cover book
Thu Dec 1 20:52:41 2016  urandom::)
Thu Dec 1 20:54:22 2016  urandom:handful of GB must assume you would never query without sufficiently restricting by cluster columns
Thu Dec 1 20:54:31 2016  urandom:i don't see how you'd make that OK
Thu Dec 1 20:54:49 2016  jeffj:i'm trying to decide how i want to answer that
Thu Dec 1 20:55:04 2016  jeffj:"handful of GB means you're probably only ever reading the head or tail most of the time"
Thu Dec 1 20:55:27 2016  urandom:agreed
Thu Dec 1 20:56:19 2016  urandom:"most of the time" :)
Thu Dec 1 20:57:43 2016  driftx:handful of GB is usually RYO index
Thu Dec 1 20:58:12 2016  thobbs:RYO?
Thu Dec 1 20:58:21 2016  urandom:roll your own, i assume
Thu Dec 1 20:58:28 2016  thobbs:ahh
Thu Dec 1 20:58:32 2016  urandom:driftx loves him some acronyms :)
Thu Dec 1 20:58:41 2016  driftx:yeah
Thu Dec 1 20:58:42 2016  urandom:Java has destroyed him in this regard
Thu Dec 1 20:59:08 2016  driftx:C* has too. Is CL commitlog, or consistency level? heh
Thu Dec 1 20:59:17 2016  urandom:yes
Thu Dec 1 21:00:02 2016  urandom:i like how you avoided typing c-a-s-s-a-n-d-r-a there, too :)
Thu Dec 1 21:00:22 2016  urandom:it's only a matter of time before you start dropping the '*'
Thu Dec 1 21:00:38 2016  driftx:I actually don't like that acronym, makes me have to press shift.
Thu Dec 1 21:00:48 2016  driftx:but less than the full gamut
Thu Dec 1 21:01:29 2016  urandom:i never liked it to be honest, makes me think it's some new fangled MS programming language
Thu Dec 1 21:04:41 2016  driftx:btw, not that it's ok at all to do, but the current record holder I have for largest partition is 529.54253GB
Thu Dec 1 21:04:54 2016  urandom:holy shit balls
Thu Dec 1 21:04:58 2016  driftx:customers.
Thu Dec 1 21:05:09 2016  driftx:ヽ(¯͒⌢͗¯͒)ノ
Thu Dec 1 21:05:14 2016  urandom:my record is 30G
Thu Dec 1 21:08:03 2016  Vijay:Joined the channel
Thu Dec 1 21:09:17 2016  jeffj:could you read from it?
Thu Dec 1 21:09:37 2016  urandom:yeah
Thu Dec 1 21:09:44 2016  jeffj:i guess with a large enough heap and high enough timeouts, anything's possible
Thu Dec 1 21:10:29 2016  jeffj:what sucks in our environment is that reads for a partition tend to come in rapid succession, so a 10G partition that generates a ton of garbage is going to get ~1k reads/second for 10 seconds, and we would have blown up in the past.
Thu Dec 1 21:10:38 2016  urandom:we could read the "latest" of a clustering column when restricted by the preceding clustering column
Thu Dec 1 21:10:39 2016  jeffj:(obviously now we protect against that in various ways)
Thu Dec 1 21:11:00 2016  urandom:any attempt at reading the entire partition was doomed to fail
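The access pattern urandom describes — reading only the "latest" row under a restriction on the preceding clustering column, never the whole partition — is what makes multi-GB partitions survivable at all. A hypothetical schema sketching the two cases (table and column names are invented for illustration, not from the log):

```sql
-- Hypothetical time-series table; all names are illustrative.
CREATE TABLE metrics (
    sensor_id  text,
    day        text,
    ts         timestamp,
    value      double,
    PRIMARY KEY (sensor_id, day, ts)
) WITH CLUSTERING ORDER BY (day ASC, ts DESC);

-- Survivable: restrict by the preceding clustering column and take the
-- head of the (reverse-ordered) rows -- the "latest" read described above.
SELECT ts, value FROM metrics
WHERE sensor_id = 'abc' AND day = '2016-12-01'
LIMIT 1;

-- Doomed on a multi-GB partition: materializes the entire thing.
SELECT ts, value FROM metrics WHERE sensor_id = 'abc';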
Thu Dec 1 21:11:45 2016  urandom:and, we also OOM from time to time when reads like this coincide at inopportune times, so maybe i should have answered "most of the time"
Thu Dec 1 21:12:20 2016  driftx:well of course reading the whole partition won't work, but 30G is enough to OOM on the index junk
Thu Dec 1 21:12:33 2016  driftx:and all you have to do is touch any piece of it
Thu Dec 1 21:12:42 2016  urandom:we have 12G heaps
Thu Dec 1 21:13:07 2016  driftx:so like two concurrent reads on it and you're toast? heh
Thu Dec 1 21:13:11 2016  jeffj:did anyone ever quantify the garbage created on read?
Thu Dec 1 21:13:20 2016  urandom:no
Thu Dec 1 21:13:30 2016  urandom:and it didn't hang around long after being identified
Thu Dec 1 21:13:42 2016  urandom:it was deleted and blacklisted
Thu Dec 1 21:13:42 2016  jeffj:" If a CQL partition is say 6.4GB, it will have 100K IndexInfo objects and 200K ByteBuffers. "
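The figure jeffj quotes lines up with Cassandra's default column index interval (`column_index_size_in_kb: 64`): one `IndexInfo` entry per 64 KiB of partition data, each holding two clustering-bound `ByteBuffer`s. A back-of-the-envelope check, assuming that default applies to the quoted setup:

```python
# Rough arithmetic behind the "100K IndexInfo / 200K ByteBuffers" figure.
# Assumes the default column_index_size_in_kb = 64, i.e. one IndexInfo
# entry per 64 KiB of partition data, with two ByteBuffers per entry
# (first and last clustering bound).
partition_bytes = 6.4e9            # the ~6.4 GB partition from the quote
index_interval = 64 * 1024         # 64 KiB default column index interval

index_info_entries = round(partition_bytes / index_interval)
byte_buffers = 2 * index_info_entries

print(index_info_entries)   # ~97,656 -- roughly the quoted 100K
print(byte_buffers)         # ~195,312 -- roughly the quoted 200K
```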
Thu Dec 1 21:14:08 2016  urandom:yeah, i read this somewhere...
Thu Dec 1 21:14:08 2016  driftx:jeffj: didn't we do that? Something like 4.5GB of garbage on a...3G partition, I think?
Thu Dec 1 21:14:35 2016  jeffj:i think that's a bit extreme
Thu Dec 1 21:14:51 2016  jeffj:you and i played with it a bit on one of mine though, and reading a 7g partition filled up a 4G eden space, i think
Thu Dec 1 21:15:09 2016  driftx:I don't recall exactly how big your partition was, but I distinctly remember 4G new wasn't enough, and 5G was
Thu Dec 1 21:15:20 2016  jeffj:it was in the 7-8g range
Thu Dec 1 21:15:30 2016  urandom:we have a few of those
Thu Dec 1 21:15:37 2016  urandom:just short of 10
Thu Dec 1 21:15:51 2016  driftx:I guess it depends mostly on how the partition is composed, though
Thu Dec 1 21:16:06 2016  urandom:most of our top 50 (according to warnings in the logs) are dominated by partitions in the 1G to 2G range
Thu Dec 1 21:16:11 2016  jeffj:our first attempt at mitigating it when we thought 9754 would be quick was to bump up eden and pray that only one concurrent read came in for any of those ridiculous partitions at a time.
Thu Dec 1 21:16:24 2016  urandom:ha!
Thu Dec 1 21:16:32 2016  urandom:jeffj: you make me feel better
Thu Dec 1 21:16:40 2016  driftx:it actually worked, for a bit
Thu Dec 1 21:16:47 2016  urandom:i am not alone.
Thu Dec 1 21:16:54 2016  jeffj:we all have the same problems.
Thu Dec 1 21:16:59 2016  jeffj:i think all of us are in the same boat
Thu Dec 1 21:17:14 2016  urandom:with a hungry tiger
Thu Dec 1 21:17:18 2016  driftx:but, that was also CMS, urandom runs G1 with explicit region size, so nothing to do there
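The CMS-vs-G1 contrast driftx draws maps to different JVM knobs: under CMS you can grow the young generation so one huge-partition read's garbage fits in eden, while G1 sizes its young generation dynamically, so with an explicit region size set there is no eden knob left to bump. A sketch of the flags involved (values are illustrative, not recommendations):

```
## CMS: enlarge the young generation ("4G new wasn't enough, and 5G was")
-XX:+UseConcMarkSweepGC
-Xmn5G

## G1: young gen resizes itself; the explicit region size is the main
## fixed setting, so the eden-bumping workaround does not apply
-XX:+UseG1GC
-XX:G1HeapRegionSize=16m
```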
Thu Dec 1 21:17:42 2016  jeffj:still beats running on 0.8
Thu Dec 1 21:17:43 2016  urandom:loves G1
Thu Dec 1 21:18:00 2016  jeffj:on the plus side, i'm not changing xml files to add new tables anymore
Thu Dec 1 21:18:07 2016  urandom:ha!
Thu Dec 1 21:18:17 2016  jeffj:and having two secondary indexes with the same name doesn't irreparably destroy schema forever
Thu Dec 1 21:18:23 2016  jeffj:so we've got that going for us
Thu Dec 1 21:18:30 2016  urandom:small favors
Thu Dec 1 21:18:49 2016  driftx:you mean 0.6? That's when we had xml last
Thu Dec 1 21:18:56 2016  jeffj:it's all a blur
Thu Dec 1 21:19:08 2016  urandom:0.6 was pretty solid tho
Thu Dec 1 21:19:28 2016  urandom:compared with some of the releases that followed
Thu Dec 1 21:22:02 2016  driftx:0.7 was the break everything release (compat-wise)
Thu Dec 1 21:22:24 2016  urandom:yeah, 0.7 wasn't exactly popular w/ users
Thu Dec 1 21:22:59 2016  driftx:I know of like, two people who pulled 0.6->0.7 off without downtime
Thu Dec 1 21:32:05 2016  Vijay:Joined the channel
Thu Dec 1 21:34:08 2016  minimarcel_:Joined the channel
Thu Dec 1 21:49:03 2016  Vijay:Joined the channel
Thu Dec 1 22:04:52 2016  driftx:exlt: time to cut the last gasp release of 2.1.17?
Thu Dec 1 22:05:10 2016  dikang:Joined the channel
Thu Dec 1 22:05:12 2016  driftx:I believe we're officially done with 2.1 now, so
Thu Dec 1 22:05:19 2016  exlt:ah, that's right
Thu Dec 1 22:05:46 2016  exlt:2 commits since 2.1.16 :)
Thu Dec 1 22:05:53 2016  driftx:yeah, heh
Thu Dec 1 22:05:53 2016  iamaleksey:don't we still do critical fixes for it?
Thu Dec 1 22:05:57 2016  iamaleksey:as in CRITICAL
Thu Dec 1 22:06:02 2016  exlt:we've been doing them
Thu Dec 1 22:06:04 2016  driftx:I thought that stopped after nov
Thu Dec 1 22:06:14 2016  iamaleksey:2 probs not enough to warrant a release. maybe is
Thu Dec 1 22:06:49 2016  driftx:well if we're out of the timeframe when we said we'd maintain, might as well release what we have
Thu Dec 1 22:07:18 2016  driftx:I say if, because it's hard to find the email where that happened
Thu Dec 1 22:07:21 2016  zaller:Joined the channel
Thu Dec 1 22:08:14 2016  iamaleksey:we didn't stop maintaining it, but we made our definition of critical stricter for 2.1, progressively
Thu Dec 1 22:09:09 2016  exlt:theoretically, 2.1 is supposed to be EOL now, and 2.2 goes to critical-fix-only?
Thu Dec 1 22:09:19 2016  jeffj:i would hope that if something showed up for 2.1 in 6 months that was like "omg corruption", we'd still be willing to merge it
Thu Dec 1 22:09:31 2016  Vijay:Joined the channel
Thu Dec 1 22:09:46 2016  jeffj:other dbs give 3-5 years before EOL. i'd love to see us try to go in that direction, just with very very strict definitions of critical
Thu Dec 1 22:09:49 2016  iamaleksey:see, nobody remembers what's supposed to be what anymore
Thu Dec 1 22:12:05 2016  exlt:well, the concept of LTS branches, etc. is under discussion - I'm happy to say we shouldn't EOL right now, until that discussion is fleshed out
Thu Dec 1 22:12:21 2016  jeffj:^
Thu Dec 1 22:12:33 2016  jeffj:+1 to that. let's not EOL anything until we figure out release strategy going forward
Thu Dec 1 22:13:15 2016  driftx:I'm not wrong, though: http://cassandra.apache.org/download/
Thu Dec 1 22:13:50 2016  driftx:hell, going by that 2.2 is EOL too
Thu Dec 1 22:14:02 2016  exlt:correct, but who wrote that and under what plan?
Thu Dec 1 22:14:09 2016  jeffj:"Apache Cassandra 3.0 is supported until May 2017"
Thu Dec 1 22:14:09 2016  exlt:I don't remember
Thu Dec 1 22:14:22 2016  driftx:svn blame or whatever it is in svn
Thu Dec 1 22:15:05 2016  exlt:I'm probably to blame for derailing some of that plan with the last few 3.X releases ;)
Thu Dec 1 22:16:21 2016  exlt:hrumpf.. new site commit is the earliest on that page
Thu Dec 1 22:16:34 2016  driftx:I'm fine with LTS ftr, I just remembered that nov 2016 was a thing.
Thu Dec 1 22:17:00 2016  driftx:swear there was a ML thread, but damn if I can find it
Thu Dec 1 22:19:31 2016  driftx:found it, "Cassandra 2.2, 3.0, and beyond" from june 10th from jbellis
Thu Dec 1 22:19:58 2016  exlt:rings a bell, now, yeah
Thu Dec 1 22:20:36 2016  driftx:6/10/2015, I should say
Thu Dec 1 22:22:37 2016  exlt:since it wasn't in my culled local mail.. http://www.mail-archive.com/user@cassandra.apache.org/msg42737.html
Thu Dec 1 22:25:20 2016  driftx:patty killed it last night, 23 pts
Thu Dec 1 22:25:27 2016  driftx:err, wrong room
Thu Dec 1 22:27:59 2016  zaller:Joined the channel
Thu Dec 1 22:30:36 2016  exlt:(it was c-1.1 branch that had a few extra commits past the last release of 1.1.12)
Thu Dec 1 22:30:58 2016  exlt:so reading that mail again, with the release of 3.0.0, 2.1 should have been shuttered
Thu Dec 1 22:31:31 2016  exlt:not sure where the Nov date came from, still :)
Thu Dec 1 22:34:48 2016  driftx:initial site commit was sylvain?
Thu Dec 1 22:35:07 2016  exlt:yeah, and it does contain the November and May dates
Thu Dec 1 22:35:32 2016  exlt:so I assume those came from whatever the previous site download page had
Thu Dec 1 22:36:20 2016  driftx:pcmanus: any light you can shed here?
Thu Dec 1 22:36:50 2016  driftx:I don't expect him to reply until tomorrow, but at least the ball is rolling.
Thu Dec 1 22:40:53 2016  exlt:r1724704 | jbellis | 2016-01-14 16:26:19 -0600 (Thu, 14 Jan 2016) | 1 line
Thu Dec 1 22:40:56 2016  exlt:add EOL info and consolidate older releases into a single section
Thu Dec 1 22:41:11 2016  rustyrazorblade:hola driftx, exlt , jeffj , iamaleksey
Thu Dec 1 22:41:21 2016  exlt:\o
Thu Dec 1 22:41:25 2016  driftx:sup rustyrazorblade
Thu Dec 1 22:45:16 2016  iamaleksey:sup sup
Thu Dec 1 22:47:17 2016  exlt:boom. https://lists.apache.org/thread.html/ac5e73ca435211dd9e6813cbc50ab9a575c41cd6e27acc5c61985923@1452482981@%3Cdev.cassandra.apache.org%3E
Thu Dec 1 22:48:10 2016  exlt:"2.1.x: supported with critical fixes only until 4.0 is released, projected
Thu Dec 1 22:48:13 2016  exlt:in November 2016"
Thu Dec 1 22:48:24 2016  exlt:so that was a projected date we haven't hit
Thu Dec 1 22:48:59 2016  exlt:day or two later jonathan committed those dates to svn
Thu Dec 1 22:49:10 2016  driftx:ah, with enough digging, we can finally remember ourselves.
Thu Dec 1 22:49:34 2016  exlt:I take it back - same day :)
Thu Dec 1 22:50:38 2016  exlt:so.. what should they say, now? :)
Thu Dec 1 22:51:09 2016  TvdW:well, I'd recommend against EOL'ing 2.2 until 3.0 is ready for people to use in production
Thu Dec 1 22:51:46 2016  TvdW:(as in, formally ready, with people willing to use it)
Thu Dec 1 22:52:20 2016  dikang:Joined the channel
Thu Dec 1 22:54:43 2016  driftx:+1
Thu Dec 1 22:56:03 2016  iamaleksey:people do use 3.0.x tho
Thu Dec 1 22:56:16 2016  iamaleksey:3.x, not as much
Thu Dec 1 22:57:50 2016  rustyrazorblade:how is everyone today?
Thu Dec 1 23:00:27 2016  jeffj:3.0 is probably ready for MOST common production use cases (people with 10-30 nodes)
Thu Dec 1 23:00:49 2016  jeffj:shouldn't call that most common, i don't know what most common really is
Thu Dec 1 23:01:07 2016  TvdW:jeffj: I have seen a lot of single-node setups.
Thu Dec 1 23:01:20 2016  jeffj:rf=n=3 is probably super common
Thu Dec 1 23:06:13 2016  driftx:single node seem a bit silly
Thu Dec 1 23:06:19 2016  driftx:*seems
Thu Dec 1 23:06:39 2016  TvdW:not really, it's great for prototyping things and by the time it goes live you're still on the old setup :)
Thu Dec 1 23:06:47 2016  TvdW:"I'll add more nodes when I need them"
Thu Dec 1 23:07:40 2016  TvdW:also for embedded databases. I've seen Cassandra embedded in some software, it would just sit in the background unless you told the software to not use the embedded one
Thu Dec 1 23:08:21 2016  TvdW:("let's embed Cassandra instead of sqlite")
Thu Dec 1 23:08:37 2016  rustyrazorblade:if you can use 1 node, you can use postgres
Thu Dec 1 23:08:57 2016  TvdW:if you can use 1 node, that doesn't say anything about a year from now
Thu Dec 1 23:09:13 2016  jeffj:i feel like i just shared this story
Thu Dec 1 23:09:22 2016  jeffj:but there exists a security appliance product that has cassandra in it
Thu Dec 1 23:09:30 2016  jeffj:90% of the time it's single node
Thu Dec 1 23:09:34 2016  jeffj:10% of the time it's clustered
Thu Dec 1 23:09:38 2016  jeffj:cassandra on all of it just in case
Thu Dec 1 23:11:38 2016  driftx:it's 2016, you can't afford two more nodes to have fault tolerance?
Thu Dec 1 23:11:47 2016  driftx:not talking about the appliance, that's fine
Thu Dec 1 23:13:23 2016  TvdW:I don't think some of these cases are about money, it just doesn't make sense to put cassandra on two nodes if you run it on the one server you have that is also your webserver
Thu Dec 1 23:37:59 2016  Vijay:Joined the channel
Thu Dec 1 23:49:01 2016  dikang:Joined the channel
