Logs for jdev
[00:06:33] * jack.bates left the chat.
[00:10:18] * Jack Bates joined the chat.
[00:10:18] * Jack Bates left the chat.
[00:10:18] * Jack Bates joined the chat.
[00:10:18] * Jack Bates left the chat.
[00:13:59] * Jack Bates joined the chat.
[00:15:01] * darkrain_ left the chat.
[01:23:09] * Florob left the chat.
[01:44:33] * marseille left the chat.
[01:44:52] * marseille joined the chat.
[01:47:26] * Jack Bates is now known as jack.bates.
[01:47:26] * jack.bates left the chat.
[01:47:26] * jack.bates joined the chat.
[01:47:26] * jack.bates left the chat.
[01:47:26] * jack.bates joined the chat.
[01:47:26] * jack.bates left the chat.
[01:49:42] * jack.bates joined the chat.
[02:23:19] * jameschurchman left the chat.
[02:28:20] * Lance joined the chat.
[03:20:04] * jack.bates left the chat.
[03:20:09] * jack.bates joined the chat.
[03:20:10] * jack.bates left the chat.
[03:20:10] * jack.bates joined the chat.
[03:20:10] * jack.bates left the chat.
[03:20:10] * jack.bates joined the chat.
[03:20:10] * jack.bates left the chat.
[03:21:15] * jack.bates joined the chat.
[03:30:24] * marseille left the chat.
[03:58:16] * Treebilou left the chat.
[03:59:02] * Treebilou joined the chat.
[04:10:42] * Lance left the chat.
[04:10:52] * Lance joined the chat.
[04:22:46] * Treebilou left the chat.
[04:23:24] * Lance left the chat.
[04:37:51] * scippio left the chat.
[04:44:48] * deryni left the chat.
[06:08:10] * teo left the chat.
[06:08:10] * teo joined the chat.
[06:13:52] * jack.bates left the chat.
[06:24:12] * Tobias joined the chat.
[06:27:04] * Alex joined the chat.
[06:28:31] * lastsky joined the chat.
[06:31:52] * deryni joined the chat.
[06:33:02] * Asterix joined the chat.
[07:07:25] * Asterix left the chat.
[07:26:29] * dax joined the chat.
[07:27:15] * tkoski joined the chat.
[07:27:25] * tkoski left the chat.
[07:28:43] * luca tagliaferri joined the chat.
[07:40:33] * rtreffer joined the chat.
[07:52:39] * Dave joined the chat.
[07:53:46] * Dave left the chat.
[07:58:28] * tong joined the chat.
[07:59:45] * rtreffer left the chat.
[08:32:03] * petermount joined the chat.
[08:35:28] * rtreffer joined the chat.
[08:36:29] * guus joined the chat.
[08:38:47] * hanzz joined the chat.
[08:39:05] <hanzz> hm, no history here :-/
[08:43:04] * Davey Boy joined the chat.
[08:46:18] * Davey Boy left the chat.
[08:53:05] * aholler joined the chat.
[08:58:48] * Alex left the chat.
[08:59:02] * Alex joined the chat.
[09:23:27] * alkino joined the chat.
[10:06:24] * scippio joined the chat.
[10:48:04] * xnyhps left the chat.
[10:48:14] * xnyhps joined the chat.
[11:23:01] * xnyhps left the chat.
[11:27:52] * xnyhps joined the chat.
[11:30:30] * hanzz left the chat.
[11:35:42] * xnyhps left the chat.
[11:35:52] * xnyhps joined the chat.
[11:38:46] * Treebilou joined the chat.
[11:39:53] * Treebilou left the chat.
[11:39:55] * Treebilou joined the chat.
[11:49:29] * alkino left the chat.
[12:03:04] * teo left the chat.
[12:23:15] * teo joined the chat.
[12:25:59] * louiz’ left the chat.
[12:27:37] * louiz’ joined the chat.
[12:32:18] <Hermitifier> The limit is fully taken up by chatstate messages :/
[12:33:05] <louiz’> yeah, the server should fix that :/
[12:35:01] * nielsvh joined the chat.
[12:44:32] <Tobias> and the clients too
[12:53:15] <louiz’> The clients respect the XEP.
[12:53:24] <louiz’> If that’s a problem, the XEP should be fixed.
[12:53:44] <louiz’> But I think only the servers should be fixed and keep only messages with a <body/> in the history.
[13:02:39] * gigam left the chat.
[13:03:55] <McKael> louiz’: are you saying the servers don't respect the XEP?
[13:04:18] <McKael> louiz’: it just sounds like the same argument would stand for both sides :)
[13:04:26] <Link Mauve> No, just that if it is inconvenient to put chatstates in the logs, the server should be fixed.
[13:04:34] <Link Mauve> And it’s not a big fix.
[13:04:46] <louiz’> McKael, no, the XEP is just unclear about what server should keep in the history.
[13:09:38] * Zash joined the chat.
[13:22:12] * naw joined the chat.
[13:24:54] <Tobias> louiz’, is the XEP unclear about sending chat-states to clients not supporting it or to chatrooms?
[13:25:28] <louiz’> The XEP is just unclear about what server should store in the history
[13:25:48] <louiz’> It’s clear on the points you’re mentioning
[13:26:11] <Tobias> louiz’, and? it says it should send chat states to chatrooms and clients not supporting it?
[13:26:18] <louiz’> yes
[13:26:30] <louiz’> well, no, only to chatrooms
[13:26:36] * lastsky left the chat.
[13:27:04] <louiz’> They should not be sent (apart from the first one) to clients not supporting it or not willing to use them ATM.
[13:29:54] * Treebilou left the chat.
[13:38:16] * rtreffer left the chat.
[13:40:31] * naw left the chat.
[13:54:36] * Florob joined the chat.
[13:58:13] * rtreffer joined the chat.
[14:00:32] * xnyhps left the chat.
[14:00:33] * xnyhps joined the chat.
[14:01:14] * Alex left the chat.
[14:01:39] * xnyhps left the chat.
[14:01:40] * xnyhps joined the chat.
[14:02:54] * guus left the chat.
[14:04:16] * Jack Bates joined the chat.
[14:04:17] * Jack Bates left the chat.
[14:04:17] * Jack Bates joined the chat.
[14:04:17] * Jack Bates left the chat.
[14:05:01] * Jack Bates joined the chat.
[14:05:01] * Jack Bates left the chat.
[14:05:28] * Jack Bates joined the chat.
[14:21:26] <McKael> Then, should the MUC server filter them for clients not supporting them?
[14:22:29] <louiz’> possibly
[14:22:44] <louiz’> but that’s not the issue that we were talking about
[14:22:56] <louiz’> that was about the history being filled with chatstates
[14:23:13] <McKael> Hmm. I think this issue would apply to other stuff/extensions as well
[14:24:54] <McKael> louiz’: I know, Tobias' question got me thinking about how it could work
[14:25:32] <McKael> but this isn't specific to chat states at all
[14:27:16] <louiz’> no
[14:27:27] <louiz’> that’s why history should just keep things with a body.
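The filter louiz’ is proposing can be sketched in a few lines; this is a hypothetical server-side helper (the function name and the stanza strings are illustrative, only the `<body/>` element and the chatstates namespace come from the specs):

```python
# Sketch of keeping only body-bearing messages in the MUC history,
# as suggested above. keep_in_history() is a made-up name; a real
# server would apply this check before appending to its history buffer.
import xml.etree.ElementTree as ET

JABBER_CLIENT_NS = "jabber:client"

def keep_in_history(stanza_xml: str) -> bool:
    """Return True only if the message carries a <body/> with text."""
    elem = ET.fromstring(stanza_xml)
    body = elem.find(f"{{{JABBER_CLIENT_NS}}}body")
    return body is not None and bool((body.text or "").strip())

# A chatstate-only message would be dropped from the history:
chatstate = ('<message xmlns="jabber:client" type="groupchat">'
             '<composing xmlns="http://jabber.org/protocol/chatstates"/>'
             '</message>')
# ...while a normal groupchat message would be kept:
real_msg = ('<message xmlns="jabber:client" type="groupchat">'
            '<body>hello</body></message>')
```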
[14:29:35] * xnyhps left the chat.
[14:29:37] * xnyhps joined the chat.
[14:32:56] <Zash> If it's a normal chat room with normal people, then yes. But that may not always be the case.
[14:36:52] * Treebilou joined the chat.
[14:40:14] * Florob left the chat.
[14:42:36] * xnyhps left the chat.
[14:42:36] * xnyhps joined the chat.
[14:45:02] * darkrain_ joined the chat.
[14:45:13] <aholler> hmm, i like a history which also includes presences ;)
[14:46:27] <Link Mauve> They are not kept in the history.
[14:46:37] <Link Mauve> They are sent at the connection.
[14:46:52] <Link Mauve> join*
[14:47:45] <aholler> keeping them in the history (too) isn't a problem and makes the history more complete. But I know it isn't standard. ;)
[14:50:15] <Zash> infinite history of all stanzas :)
[14:50:48] <Zash> combined with <history since=timestamp/> could be used as a journal
[14:55:56] <McKael> Zash: <history since=timestamp1 to=timestamp2/> ? :)
[14:56:28] <Zash> to is always now
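For reference, the `<history/>` element being discussed is the XEP-0045 join-time request; a minimal sketch of building such a join presence with Python's stdlib (the room JID, nick, and timestamp below are made up):

```python
# Illustrative construction of a MUC join presence carrying a
# <history since='...'/> request, per XEP-0045. Only the MUC
# namespace and element names come from the spec.
import xml.etree.ElementTree as ET

MUC_NS = "http://jabber.org/protocol/muc"

def join_with_history(room_jid: str, nick: str, since_utc: str) -> str:
    presence = ET.Element("presence", to=f"{room_jid}/{nick}")
    x = ET.SubElement(presence, f"{{{MUC_NS}}}x")
    ET.SubElement(x, f"{{{MUC_NS}}}history", since=since_utc)
    return ET.tostring(presence, encoding="unicode")

stanza = join_with_history("jdev@conference.jabber.org", "tester",
                           "2011-07-21T00:00:00Z")
```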
[15:01:15] * dax left the chat.
[15:11:40] <Link Mauve> aholler, I think chatstates could be stored with the presences, so when a new person joins the room, he/she would have the exact state of each participant.
[15:12:29] <Link Mauve> Of course, only the ones that sent a chatstate since their join.
[15:15:18] * Asterix joined the chat.
[15:46:17] * alkino joined the chat.
[15:57:05] * deryni left the chat.
[15:57:35] * gnufs joined the chat.
[15:57:56] * gnufs left the chat.
[16:39:53] * pombreda joined the chat.
[16:42:43] * alkino left the chat.
[16:44:10] * deryni joined the chat.
[16:45:20] * thkoch2001 joined the chat.
[16:46:54] * marseille joined the chat.
[17:02:12] * petermount left the chat.
[17:13:54] * deryni left the chat.
[17:19:53] * Florob joined the chat.
[17:21:18] * luca tagliaferri left the chat.
[17:22:56] * Neustradamus left the chat.
[17:50:18] * deryni joined the chat.
[17:51:48] * Neustradamus joined the chat.
[17:52:30] <aholler> the problem with presences in history is that clients would have to distinguish between a presence from the history and the stuff sent through the presence broadcast.
[17:54:01] <deryni> Presence broadcast ends with the user's own presence and only then can history replay start.
[17:57:42] <aholler> yes, but you have to remove any status from presence in history, otherwise some clients are getting confused (those which aren't checking the code).
[18:00:58] <deryni> If you care about not breaking broken clients, sure.
[18:10:09] * waqas joined the chat.
[18:10:43] * naw joined the chat.
[18:14:34] * tong left the chat.
[18:15:16] * tong joined the chat.
[18:16:08] * tong left the chat.
[18:18:14] * xnyhps left the chat.
[18:18:23] * xnyhps joined the chat.
[18:48:59] * stpeter joined the chat.
[19:03:35] * Lance joined the chat.
[19:08:39] * Lance left the chat.
[19:23:55] * thkoch2001 left the chat.
[19:28:25] * deryni left the chat.
[19:28:25] * deryni joined the chat.
[19:32:25] * naw left the chat.
[19:35:58] * pombreda left the chat.
[19:37:30] <Tobias> stpeter, hi
[19:37:51] * Alex joined the chat.
[19:42:00] * pombreda joined the chat.
[19:43:33] * rtreffer left the chat.
[19:43:38] * rtreffer joined the chat.
[19:54:59] * rtreffer left the chat.
[19:55:39] * rtreffer joined the chat.
[19:55:48] <stpeter> hi Tobias, can I help you? :)
[19:56:35] <Tobias> yeah, with a section on S5B proxy usage in XEP-0261 :)
[19:57:04] <Tobias> a small example or so...the IQ activate to send, to the proxy, when to send it
[19:57:40] <Tobias> talked with marcus a bit about it and implementing it and crossing fingers that'll work somehow
[19:57:58] <stpeter> ah
[19:58:10] <stpeter> that's not covered by XEP-0065?
[19:58:32] <stpeter> I'm happy to add a note about it to XEP-0261
[19:58:36] <Tobias> http://xmpp.org/extensions/xep-0065.html#mediated-flow
[20:00:33] <Tobias> so 1. i connect to the proxy do the S5B AUTH, S5B CONNECT, 2. send session-initiate over with a candidate for that proxy, 3. the candidate gets chosen, 4. i send IQ activate to the proxy, 5. send the transport-info to the other party and at last 6. can start transferring data
[20:00:43] <Tobias> that's how i understood it now
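The IQ activate in step 4 is the XEP-0065 activation request sent to the proxy; a sketch of constructing it (all JIDs, the stream ID, and the IQ id below are placeholders):

```python
# XEP-0065-style proxy activation IQ: <iq type='set'> carrying a
# bytestreams <query sid='...'><activate>target</activate></query>.
# Only the namespace and element names come from the spec; the rest
# is illustrative.
import xml.etree.ElementTree as ET

BS_NS = "http://jabber.org/protocol/bytestreams"

def activate_iq(proxy_jid: str, sid: str, target_jid: str, iq_id: str) -> str:
    iq = ET.Element("iq", {"type": "set", "to": proxy_jid, "id": iq_id})
    query = ET.SubElement(iq, f"{{{BS_NS}}}query", sid=sid)
    activate = ET.SubElement(query, f"{{{BS_NS}}}activate")
    activate.text = target_jid
    return ET.tostring(iq, encoding="unicode")

stanza = activate_iq("proxy.example.org", "mySID",
                     "juliet@capulet.lit/balcony", "act1")
```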
[20:00:46] <stpeter> hmm, yeah, we should clarify that in http://xmpp.org/extensions/xep-0260.html#complete
[20:00:48] <stpeter> on the phone
[20:03:23] <aholler> btw. if a s5b-proxy isn't an extra domain, psi tries forever to check services
[20:04:42] <Tobias> heh
[20:05:19] <aholler> maybe it should be noted in xep-0030 that identities don't have to be located on a separate domain
[20:05:45] <aholler> i'm sure it is, but maybe not that clearly
[20:10:15] <aholler> s/identities/services/
[20:10:26] <stpeter> I'll be on the phone here for a while
[20:10:30] <stpeter> just FYI :)
[20:22:06] * aRyo joined the chat.
[20:28:40] * rtreffer left the chat.
[20:28:50] * nielsvh left the chat.
[20:28:57] * rtreffer joined the chat.
[20:34:01] * waqas left the chat.
[20:54:42] * Asterix left the chat.
[20:55:12] <stpeter> ok, I'm off the phone, I have a few minutes before my next meeting :)
[20:55:21] <Tobias> tight schedule
[20:55:33] <stpeter> yeah I'm a popular person today
[20:55:38] <stpeter> tomorrow, who knows?
[20:55:39] <Tobias> poor you
[20:55:57] <stpeter> /me scrolls up
[20:56:35] <stpeter> "so 1. i connect to the proxy do the S5B AUTH, S5B CONNECT, 2. send session-initiate over with a candidate for that proxy, 3. the candidate gets chosen, 4. i send IQ activate to the proxy, 5. send the transport-info to the other party and at last 6. can start transferring data" -- yes, that is how I understand it too
[20:56:41] <stpeter> do we need to make that clearer in the spec?
[20:57:01] <stpeter> I think it would be good to document the activate message
[20:57:05] <stpeter> so I will add that
[20:59:28] <Tobias> yup..that S5B activate to the proxy would be nice to have in there...sure it's in XEP-0065 however XEP-0065 only applies in small bits to XEP-0261 i think. XEP-0065's described protocol flows differ quite a lot from XEP-0261
[21:00:22] <Tobias> it's roughly the same but different stanzas, more logic, etc.
[21:01:08] * deryni left the chat.
[21:01:10] * Lance joined the chat.
[21:02:59] * rtreffer left the chat.
[21:13:15] <aholler> hmm, I'm confused. what has xep-0261 to do with s5b-proxies?
[21:14:34] <aholler> don't you mean 0260?
[21:15:39] <stpeter> yes 260
[21:15:55] <stpeter> in my other meeting now
[21:16:28] <Tobias> aholler, yeah..sry..meant 260
[21:20:36] <Tobias> aholler, so what xmpp-related stuff are you coding on recently?
[21:29:22] <stpeter> Tobias: meeting finished :)
[21:30:10] <stpeter> /me fixes up XEP-0260 while it's top of mind
[21:30:39] <aholler> recently muc, pubsub and pep ;)
[21:32:53] <Tobias> aholler, developing what? server? client? some custom specialized software?
[21:33:17] <aholler> server, happy when I'm finished with that stuff (almost done)
[21:33:36] <stpeter> aholler: what language are you coding in?
[21:33:44] <aholler> c++0x
[21:34:10] <Tobias> nice :)
[21:35:26] <Tobias> Swift(-en) is still at C++03 since they want to support some oldish compilers or distros or so
[21:35:37] * Lance left the chat.
[21:36:29] <Tobias> aholler, know of some kind of precompiler that dumbs c++0x down to c++03 for compilers not supporting it? :)
[21:37:08] <aholler> c++0x or c++11 doesn't bring many advantages for clients. but for (high-traffic) servers it could mean some great speed improvements if used
[21:37:33] <aholler> Tobias: no, not possible
[21:37:36] <Tobias> aholler, it brings quite a lot, usability :) having to type less and so
[21:37:44] <aholler> thats too
[21:39:40] <aholler> auto is quite nice to use ;)
[21:41:23] <Tobias> i can sure believe that...but even without 0x..coding with clang is way more joy than it used to be with gcc when i started with c++
[21:43:46] * pombreda left the chat.
[21:44:52] <aholler> why?
[21:46:13] <Tobias> faster compilation, more useful error messages, misspelling detection, etc.
[21:57:50] * Florob left the chat.
[22:03:08] <stpeter> Tobias: I just updated XEP-0260
[22:03:36] <stpeter> brb
[22:04:30] <Tobias> okay..will give the changes a read later
[22:10:18] <stpeter> thanks
[22:11:56] <Tobias> hope we can move them soon to draft...so that more adoption can kick in
[22:12:12] <Tobias> but i guess we have to wake the dead council for this to happen ;)
[22:12:28] <stpeter> I think we're only waiting for Fritzy, but I'm so busy that I don't remember the Council voting status
[22:12:33] <stpeter> I can check the minutes, I'm sure
[22:12:41] <Tobias> nah..not that important
[22:13:29] <aholler> btw. does someone know a xmpp-benchmark?
[22:13:49] <Tobias> aholler, what do you want to benchmark?
[22:14:11] <aholler> e.g. stanza turn-around times
[22:14:39] * whatever left the chat.
[22:14:54] <Tobias> i once had some tools, memory-usage per connected client and stanza throughput...however based on gloox, which one had to compile yourself
[22:15:26] <Tobias> swift has also a small bench tool for connecting some clients to a server
[22:18:32] * deryni joined the chat.
[22:19:14] <Tobias> aholler, running those data through gnuplot you'd get graphs like this: http://ayena.de/files/lxmppd/logging/prosody_610_stanza_wo_logging_chart_localhost_1500.png
[22:19:40] <Tobias> it was basically sending stanzas to itself
[22:20:23] <aholler> hmm, some clients aren't enough. e.g. jabsimul is too slow to be useful. ;)
[22:20:47] * whatever joined the chat.
[22:21:39] <Tobias> there's also Tsung
[22:21:45] <Tobias> but i've never used that
[22:22:04] <stpeter> ok, I think I'm done working for now, time for a bit of a break -- ttyl :)
[22:22:14] * stpeter left the chat.
[22:22:15] <Tobias> cya
[22:23:32] * Alex left the chat.
[22:26:12] * eris0xff joined the chat.
[22:26:12] * eris0xff left the chat.
[22:26:24] * eris0xff joined the chat.
[22:26:24] * eris0xff left the chat.
[22:30:17] * eris0xff joined the chat.
[22:36:27] * eris0xff left the chat.
[22:41:28] * eris0xff joined the chat.
[22:44:29] <eris0xff> hi
[22:52:15] <eris0xff> I'm thinking about using XMPP as a routing protocol for log events. log4net/log4j listener that publishes log events to a pubsub node based on app name or other criteria. a plugin could listen on a collection of pubsub nodes and forward that traffic through an encrypted connection either to rsyslog backend or a logging service that accepts traffic via xmpp. users could monitor log events in realtime by subscribing to a node. a MUC owner could subscribe the room to a node so that all users could monitor the traffic. possibly a bot could modify log4j config files to source quench incoming traffic, change filter etc.
[22:53:27] <eris0xff> would that work? thinking of various issues such as access control, performance etc.
[22:53:40] <aholler> it should
[22:54:02] <eris0xff> it would be pretty sweet I'm thinking.
[22:55:34] * rtreffer joined the chat.
[22:55:44] <eris0xff> other issues are making sure that events aren't dropped, collectors on DMZs aggregating traffic to push to something like ejabberd
[22:56:01] <eris0xff> stream encryption / TLS etc
[22:58:15] <eris0xff> Also I've seen some performance figures like 10k messages per second per core handled by well tuned servers. Since I'd be using XMPP to wrap log events, max msgs / second becomes critical.
[23:01:06] <eris0xff> Anyway XMPP features dovetail rather well for log eventing if the performance isn't an issue. A standard event format like CEE/XML could be included directly in the XMPP message and would be generally ignored by your basic client.
[23:01:13] <eris0xff> (but routed correctly)
[23:01:50] <aholler> where we are at question for a benchmark again ;)
[23:02:09] <eris0xff> sorry. was that for me?
[23:02:28] <aholler> Tobias: which tool did you use for that benchmark?
[23:02:35] <aholler> eris0xff: yes
[23:02:39] <eris0xff> thx
[23:03:36] <eris0xff> The 10k msgs/s per core was something that I got from hunting around on the lists and confirmed by at least one server dev
(tigase)
[23:03:40] <Tobias> aholler, one i wrote myself
[23:04:19] * evilotto left the chat.
[23:04:49] <Tobias> aholler, i can upload the sources if you're interested
[23:05:15] <eris0xff> Tobias: Is there a recent benchmark of the generally available servers?
[23:06:09] <aholler> I don't think so.
[23:06:24] <Tobias> i don't know of any
[23:06:34] <eris0xff> Would be interesting
[23:06:52] <Tobias> depends all on what you want to benchmark
[23:06:59] <eris0xff> (and probably challenging)
[23:07:12] <Tobias> a lot people have different needs/priorities
[23:07:12] <aholler> exactly, benchmarking is a challenge
[23:07:16] <eris0xff> sure
[23:07:54] * notKev joined the chat.
[23:08:07] <aholler> e.g. to really benchmark you would have to send and receive without interpreting the stuff (on the fly).
[23:08:20] <eris0xff> I was thinking of trying an optimized ejabberd first (with compiled msg routing)
[23:08:29] <eris0xff> yes.
[23:09:10] <eris0xff> Wouldn't the server have to at least unwrap the message once to resolve routing?
[23:09:17] <notKev> Define 'unwrap'.
[23:09:37] <notKev> It does need to at least determine the 'to' and 'from', yes.
[23:09:47] <notKev> (And stanza type and 'type' attribute).
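The minimal per-stanza routing work notKev lists (stanza name plus 'to', 'from', and 'type') can be sketched as follows; the helper name and the sample stanza are illustrative:

```python
# Pull out only the routing-relevant pieces of a stanza: the stanza
# name and its to/from/type attributes, without inspecting the payload.
import xml.etree.ElementTree as ET

def routing_info(stanza_xml: str):
    elem = ET.fromstring(stanza_xml)
    # Strip a namespace prefix like {jabber:client} from the tag, if any.
    name = elem.tag.rsplit("}", 1)[-1]
    return name, elem.get("to"), elem.get("from"), elem.get("type")

info = routing_info('<message to="a@example.com" from="b@example.com" '
                    'type="chat"><body>hi</body></message>')
```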
[23:09:52] <aholler> yes, but how do you benchmark the server if your benchmarking tool just measures its own xmpp-parser/networking routines
[23:09:53] <eris0xff> well just simply XML parsing and examining the destination maybe
[23:10:28] <eris0xff> Right -- it would have to be highly optimized for testing
[23:10:41] <notKev> aholler: I have found, while playing around with things like this, that the server performance frequently exceeds the testing tools.
[23:10:53] <aholler> thats the problem
[23:11:06] <eris0xff> Hmm. I think I could generate sufficient load ;-)
[23:11:19] <notKev> eris0xff: Generating sufficient load is relatively easy.
[23:11:21] <aholler> e.g. trying jabsimul I found that it just measures its own speed ;)
[23:11:28] <eris0xff> maybe sense of it is not :-)
[23:11:29] <notKev> But you need to generate the load while being XMPP compliant.
[23:11:36] <Tobias> notKev, yup..writing a single process benchmarking tool that keeps tons of client connections to a server open and living isn't that easy either...especially when TLS comes into play
[23:11:38] <eris0xff> sorry making
[23:11:46] <notKev> And be able to measure your speed while doing so.
[23:12:00] <eris0xff> right -- basically a loopback measurement
[23:12:13] <notKev> Tobias: Right, especially if you start doing something silly like multithreading the client :)
[23:12:39] <aholler> the perfect benchmark-tool would open connections and then just throw out stuff and store the received packets for analyzing them later
[23:12:51] <notKev> aholler: No, it wouldn't.
[23:12:57] <eris0xff> :-)
[23:12:57] <notKev> aholler: Because that's not XMPP-compliant :)
[23:13:15] <Tobias> well..need some sleep..gn8
[23:13:16] <aholler> you could check the compliance afterwards ;)
[23:13:17] <notKev> (The client is obliged to answer iq stanzas sent to it)
[23:13:27] * Tobias left the chat.
[23:13:27] <notKev> (And if it doesn't do so, is liable to get disconnected from the server)
[23:13:37] <notKev> No, you can't, not if your client gets disconnected from the server.
[23:13:42] <eris0xff> notKev: depends what you're measuring maybe
[23:14:04] <eris0xff> hmm. goodpoint
[23:14:32] <notKev> I missed the first half of this conversation, btw.
[23:14:52] <notKev> I came in for the 10k/core message.
[23:14:56] <eris0xff> I'm thinking about a XMPP infrastructure for event logging / transport
[23:14:57] <aholler> hmm, yes, but for measuring e.g. stanza turn-around-times the benchmark-tool doesn't have to answer requests. but that depends on what is sent
[23:15:05] <notKev> aholler: Yes, it does.
[23:15:15] <notKev> aholler: If the server sends it an iq, it must answer, or the server may disconnect it.
[23:15:42] <aholler> yes, but there aren't many situations where the stuff sends iqs the client has to answer
[23:15:51] <aholler> s/stuff/server/
[23:15:56] <notKev> Admittedly, I don't know of any server that's going to be sending e.g. pings to the client when the client is actively sending stuff.
[23:16:10] <notKev> But if one of the client connections were to be idle for a while, some servers would send a ping to check it's still there.
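Answering such a ping is cheap: for an XEP-0199 ping, all a benchmark client has to do is echo an empty result IQ with the same id, swapping to/from. A sketch, with a made-up helper name and placeholder JIDs:

```python
# Build the reply to an XEP-0199 ping IQ: an empty <iq type='result'/>
# with the ping's id, addressed back to the sender.
import xml.etree.ElementTree as ET

def pong_for(ping_xml: str, my_jid: str) -> str:
    ping = ET.fromstring(ping_xml)
    pong = ET.Element("iq", {
        "type": "result",
        "id": ping.get("id"),
        "to": ping.get("from"),
        "from": my_jid,
    })
    return ET.tostring(pong, encoding="unicode")

reply = pong_for('<iq type="get" id="p1" from="server.example" '
                 'to="client@example/res">'
                 '<ping xmlns="urn:xmpp:ping"/></iq>',
                 "client@example/res")
```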
[23:16:54] <eris0xff> notKev: So the test client could answer that after it sends a test batch.
[23:17:24] <notKev> If it's not been disconnected by then, yes.
[23:18:27] <notKev> Your use case is at least relatively easy to construct tests for.
[23:18:34] <notKev> Relative to trying to load test an IM system.
[23:18:39] <eris0xff> right: managed batch sizes so you could answer within a reasonable period of time.
[23:18:55] <eris0xff> right. full testing is really challenging
[23:19:21] <eris0xff> maybe like testing presence updates etc.
[23:19:47] <notKev> Testing IM is very challenging.
[23:20:11] <notKev> You need rosters, you need presence, you need IM, you need MUC, you need PEP nodes, you need PEP updates, etc. etc. etc.
[23:20:15] <eris0xff> So I'm thinking of sending my log events to a pubsub node
[23:20:45] <notKev> If you've got multiple subscribers, that sounds sane.
[23:20:46] <eris0xff> then attach a plugin to that node to stream events to an rloging backend
[23:20:59] <aholler> you could just setup 1 node and subscribe 100.000 clients and then send an item to the node
[23:21:27] <notKev> Hrmm - wouldn't just having a client subscribe to the node be better than writing server plugins?
[23:21:36] <notKev> aholler: Even that's not necessarily straightforward to test.
[23:21:41] <eris0xff> well subscriber could be an event gateway to rlogin or normal users could subscribe or a MUC room could
[23:22:13] <notKev> aholler: As in a 'real' environment those 100,000 clients would be clearing out their network buffers quickly, while if they're all part of a test application they may well not.
[23:22:26] <aholler> notKev: it would test how long the server needs to process the 100.000 notifications and gives a hint without the need to setup 100.000 connections
[23:22:29] <eris0xff> I could subscribe to a collection node for a group of application based on any criteria
[23:22:40] <eris0xff> right.
[23:23:19] <notKev> aholler: It tests how long it takes the client to process 100,000 notifications, probably.
[23:23:31] <aholler> no, just leave the clients offline
[23:23:41] <notKev> But then the server's not going to be delivering to them.
[23:23:52] <aholler> it has to (internally)
[23:23:59] <eris0xff> so come up with a /dev/null dest
[23:24:17] <eris0xff> then you have internal routing speed
[23:24:31] <aholler> thats it, easy internal routing speed
[23:24:48] <notKev> It gives a lower bound, certainly.
[23:24:51] <eris0xff> btw: is there any reason a MUC can't subscribe to a pubsub node?
[23:25:04] <aholler> maybe two servers, one as pubsub-node, the other which receives the notifications (while no client is online)
[23:25:36] <notKev> But how would you know the test was complete?
[23:25:50] <aholler> counting stanzas
[23:25:51] <eris0xff> You route a special finish message
[23:25:58] <eris0xff> with a timestamp?
[23:26:10] <notKev> eris0xff: Conceptually not - but it's not something that's generally done, AFAIK.
[23:26:17] <notKev> aholler: Counting stanzas where/how?
[23:26:54] <aholler> e.g. through stream-management
[23:26:59] <eris0xff> that could only be measured in an internal XML parser, but you should be able to infer it.
[23:27:31] <notKev> I think you're dangerously close to making assumptions about how the server internals work, there.
[23:27:38] <eris0xff> probably
[23:27:49] <aholler> not if two servers are used
[23:28:09] <aholler> the problem is how to check the stanza-count ;)
[23:29:19] <eris0xff> sorry: I missed the need to check stanza count. why?
[23:29:37] <eris0xff> (probably me not understanding a term or something)
[23:29:40] <notKev> aholler's suggesting using two servers.
[23:29:43] <aholler> to know when the notifications are transferred
[23:29:51] <eris0xff> oh
[23:29:58] <notKev> Have server one host the pubsub node, then 100,000 subscriptions @server2.
[23:30:11] <eris0xff> hmm.
[23:30:13] <notKev> Get the stanza count on server 2 to check when 100,000 stanzas have been transferred.
[23:30:26] <notKev> It's a pretty smart way of testing it, as it avoids the null routing problem.
[23:31:33] <notKev> It's not perfect, naturally, but it seems sensible to me.
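The two-server measurement could be driven by a loop like the following; everything here is a stand-in, in particular get_stanza_count(), which would have to query server 2 in some server-specific way:

```python
# Sketch of the measurement side of the two-server test: publish once
# on server 1, then poll server 2's stanza counter until all EXPECTED
# notifications have crossed s2s, and derive throughput from the
# elapsed time.
import time

EXPECTED = 100_000

def wait_for_delivery(get_stanza_count, poll_interval=0.1, timeout=600.0):
    """Return (elapsed_seconds, stanzas_per_second) once EXPECTED arrive."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get_stanza_count() >= EXPECTED:
            elapsed = time.monotonic() - start
            return elapsed, EXPECTED / elapsed
        time.sleep(poll_interval)
    raise TimeoutError("notifications did not all arrive in time")
```

One caveat from the discussion above: the elapsed time includes s2s processing on both ends, so this gives a lower bound on the publishing server's own throughput.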
[23:32:02] <eris0xff> Right, but then you have server-server transport issues. wouldn't that be a chokepoint possibly?
[23:32:09] <notKev> Yes.
[23:32:32] <aholler> that's basically just opening one or 2 connections
[23:32:34] <notKev> But if you assume that s2s processing is about as fast as c2s processing.
[23:32:40] <eris0xff> You could minimize that by going through loopback or similar
[23:32:49] <eris0xff> Right.
[23:32:53] <notKev> And you run the same server software with the same spec at each end.
[23:32:56] <eris0xff> Possibly faster depending
[23:33:13] <eris0xff> Multi-core machine with individual loopback addresses.
[23:33:14] <notKev> Then it seems like a least-bad test.
[23:33:18] <eris0xff> heh
[23:33:52] <aholler> maybe using wireshark as benchmark-tool, just save the traffic and look afterwards at the times
[23:34:13] <notKev> Doesn't that then affect transfer rates?
[23:34:46] <notKev> Of course, you could just start with a naive test, and see if that's fast enough.
[23:35:00] <notKev> If the naive test's too slow, you can work out whether it's the server or test that's the problem.
[23:35:06] <notKev> If even the naive test is fast enough, no problems.
[23:35:37] <notKev> Although my experience of naive tests for stuff like this is that they frequently break things.
[23:40:39] <aholler> but 10k messages/s which are getting multiplied through pubsub might be challenging
[23:40:53] <aholler> if that is a constant stream
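Combining aholler's rate with the 100,000-subscriber scenario from earlier makes the fan-out concrete (back-of-the-envelope only; both figures are taken from the discussion above, not measured):

```python
# Rough fan-out arithmetic for the pubsub scenario discussed above:
# a constant 10k msgs/s published to one node with 100k subscribers.
incoming_per_sec = 10_000
subscribers = 100_000
outgoing_per_sec = incoming_per_sec * subscribers  # notifications/s

# Against the ~10k stanzas/s/core figure quoted earlier, that fan-out
# corresponds to the throughput of on the order of 100,000 cores:
cores_equivalent = outgoing_per_sec / 10_000
```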
[23:41:21] <notKev> It seems reasonably high-throughput, yes
[23:41:25] <notKev> but a fun application.
[23:41:41] <aholler> I already wanted to test such. ;)
[23:41:49] <aholler> maybe next week ;)
[23:42:07] <notKev> It'd be fun to see how M-Link performs under that load.
[23:42:29] <notKev> Thinking about the internals, it should be able to chuck stuff through in that scenario very fast.
[23:45:20] <aholler> btw. i still don't get a version ;)
[23:46:52] <notKev> Fun. You can ask Peter to file a ticket, or I'll try to remember to look at it in a couple of weeks.
[23:46:55] * aRyo left the chat.
[23:46:57] * aRyo joined the chat.
[23:47:04] <eris0xff> Interesting using pubsub for it.
[23:47:16] <eris0xff> Wasn't even thinking about that at first --- just as a use case.
[23:48:03] <eris0xff> That way you don't need to synthesize msgs to 10k recipients -- just subscribe to the feed. right?
[23:48:16] <eris0xff> (at least to the other server)
[23:49:18] <aholler> yes
[23:49:36] <eris0xff> So the test rig would be one process to feed msgs to the server under test and then that server connected to a fake destination server via loopback which then counts messages transferred.
[23:50:06] <eris0xff> The fake server creates the subscription requests (all 10k of them :-)
[23:50:42] <aholler> you can use a simple script to generate the 10k subscriptions
[23:50:54] <eris0xff> The fake server would need to be extremely performant :-)
[23:50:59] <notKev> It needs to be a not entirely fake server at the other end.
[23:51:11] <notKev> So I'd suggest using a real one equivalent to the one you're testing.
[23:51:19] <eris0xff> hmm
[23:51:32] <aholler> just use e.g. two ejabberd
[23:51:36] <eris0xff> right
[23:52:15] <eris0xff> you're still implicitly testing the other ejabberd's ability to accept messages which is related, but probably not highly relevant
[23:52:49] <eris0xff> message routing should be reflexive (if thats the word I'm looking for)
[23:53:13] <aholler> the receiving server just throws away those messages, but it still has to parse them
[23:53:35] <aholler> so I think it boils down to the parsing speed
[23:53:47] <eris0xff> probably prime the test by running a couple stream loads through it to give any VM issues time to settle down
[23:54:08] <eris0xff> well parsing and lookup
[23:54:16] <eris0xff> needs to associate it
[23:54:30] <eris0xff> which could be very slow if done in a bone headed manner
[23:55:03] <aholler> yes. depends how the server looks up if a client is online.
[23:55:05] <eris0xff> (basically "where is that user")
[23:55:18] <eris0xff> hmm
[23:55:21] <eris0xff> that too
[23:55:38] <eris0xff> it's not local, send it to that server
[23:55:39] <aholler> could end in an request to a db for every msg, but that depends on the server
[23:56:02] <eris0xff> heh. could do a sequential search through an un-indexed db
[23:56:10] <eris0xff> :-) make up a nightmare scenario
[23:56:18] <aholler> ;)
[23:56:49] <eris0xff> obviously some sort of hash is what most server would probably use by default.
[23:56:54] <eris0xff> (maybe cached)
[23:57:14] <aholler> don't know how servers are working, but I've read some complaints that pubsub doesn't scale.
[23:57:31] <aholler> don't know what problems those people had
[23:57:41] <eris0xff> well pubsub should scale. some implementations dont
[23:58:02] <aholler> or they just configured the server wrong
[23:59:09] <aholler> e.g. forgotten to disable some shapers or such
[00:00:45] <eris0xff> probably just use some sort of trie algorithm which performs in near O(1)
[00:02:10] <aholler> only the complaining people can answer that. I've just read some complaints and wondered what they've done
[00:02:53] <aholler> or maybe ejabberd is really slow. never checked that.
[00:03:59] * scippio left the chat.
[00:04:20] <aholler> don't know how erlang really scales
[00:04:44] * aRyo left the chat.
[00:05:10] * rtreffer left the chat.
[00:05:41] <eris0xff> I've always heard that ejabberd is fairly fast. you can speed it up significantly by telling erlang to compile the message router (you'd think that would be enabled by default)