How to tell you're doing a good job at enforcing CoCs: white male nazi-dudes complain about the project not being "free software" because they get "censored".
Surprisingly, Mastodon is the first application I actively regret self-hosting.
I mean, eating 1.3GB of RAM after a few days of stabilizing is one thing, but also eating 16GB of HDD just for caching avatars and preview_cards is kinda ridiculous. (And I'm not even talking about media attachments - there is a task to purge old attachments, which makes them manageable).
@deadsuperhero i've been here for quite some time, and tried to start interesting discussions with... people in charge. but you know, ... i apparently know nothing, so it's not even worth talking to me... *cough* 🙄
thanks for the links, though. will read and see if i can write some emails. :)
“Roses are red, Facebook is blue. Alternative social networks are what we want, but on implementing them… we have no clue”.
Me on Handling data in (alternative) social networks - https://schub.io/blog/2018/08/22/handling-data-in-alternative-social-networks.html
@Gargron What I'm concerned about, at least in diaspora*'s case (and yes, I am obviously biased), is that we are trying to work around an actual issue with a workaround that will not scale if we become somewhat popular.
I have not yet found a nice solution, but I always try to talk to people who might also have spent some time thinking about that...
@Gargron But it doesn't have to impact UX. Put outbound jobs into their own queue and make sure they don't consume all available sockets, so other jobs can run just fine.
We (diaspora, that is) would be able to deliver a post to 100k nodes within ~9 minutes on average server hardware (assuming the response times I measured across our actual network), while in the same scenario, a central relay would not be able to handle the same load, as even the TLS handshakes would consume more than 4 hours.
@Gargron To be more specific: There is a huge difference between a relay delivering 10 posts from each of 100k nodes (the relay delivering 1 million posts) vs. having each node deliver its own 10 posts.
You'd have to do a lot of work to make a relay cluster performant enough to handle that load, while delivering 10 posts within a short period of time is a perfectly reasonable workload, even for small nodes on slow hardware.
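The back-of-the-envelope math in the last two toots can be sketched like this. The per-handshake cost is an assumed round number for illustration, not the response times actually measured across the network:

```python
# Illustrative relay-vs-direct federation arithmetic (assumed numbers).

NODES = 100_000       # nodes in the network
POSTS_PER_NODE = 10   # posts each node wants to federate
HANDSHAKE_S = 0.015   # assumed cost of one TLS handshake, in seconds

# Central relay: one system must deliver every post to every node.
relay_deliveries = NODES * POSTS_PER_NODE
relay_handshake_hours = relay_deliveries * HANDSHAKE_S / 3600

# Distributed delivery: each node only handles its own posts.
per_node_deliveries = POSTS_PER_NODE

print(f"relay: {relay_deliveries:,} deliveries, "
      f"~{relay_handshake_hours:.1f}h spent on handshakes alone")
print(f"each node: {per_node_deliveries} deliveries")
```

With these assumed numbers the relay spends over 4 hours on TLS handshakes alone, while each individual node only ever makes 10 deliveries.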
@Gargron According to my experiments and tests, it would actually be more efficient to have the nodes themselves do the work. Yes, they will be kinda busy sometimes, but if the nodes are small (which, in a nice world, they would be) there wouldn't be much traffic to federate outbound, so this might actually work.
@Gargron This is true at the scale we are currently living in, but this won't hold up in a future where we are "popular".
Opening connections and delivering the payload is much more expensive than receiving an item, so eventually, the relays will simply be unable to deliver their backlog. Sure, you can scale the relays by throwing a lot of hardware and a lot of bandwidth at them.
fixing the web at @mozilla during the day, doing more open source stuff at night. diaspora* core dev. working on privacy, communication, and other fun stuff.
Just my private Mastodon instance. Move along.