I've considered that, but I'm not a big fan of Mumble, and even if we were to switch, who's to say the new software wouldn't be affected by whatever is hitting TS anyway?
AFAIK, they would all run off the very same network, so if TS is being hit by a large-scale DDoS attack, my guess is any other software would be too.
Maybe someone here who knows more about this stuff at a network level can give us more insight. I do know that at a webserver level, at least, if one site is being attacked, all sites on my box suffer.
During the weeknights we often have anywhere from 6 to 16 iRacers in channel, along with Fireblade's Europa group, which is about equal in size.
Last night it appeared as if every new user connecting to the Europa channel was doing so by kicking an iRacer. We dropped one at a time until only Jeff was left in channel; the rest of us could never reconnect, or only briefly.
Earlier in the week we were all kicked and reconnected, then kicked and reconnected again, several times over an hour-or-so session.
We're also starting to see more and more instances of robot voice in channel as packet loss becomes more prominent.
...basically all the telltale signs we're accustomed to that mean the service is sucking.
Hi Ken, I spoke with the data center about the issue, as I had more network data to give them. For some reason, our configuration profile had been changed from essentially "voice applications" to "website(s)". As a result, if there was a lot of UDP traffic coming from a particular user, the filter would block it, and that user would have connection issues. We had the profile changed tonight about an hour ago, and I am hoping that will resolve the recent issues you have been experiencing. It has resolved them for the other customers that reported an issue, so I'm hoping it does the trick for you as well. (We hadn't had a single issue in this location since we moved there about 4-5 months ago =x)
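For intuition on why a "website" profile would choke voice traffic: web filters are typically tuned for short bursts, while voice clients send a steady stream of small UDP packets. The sketch below models that with a simple token bucket; all rates and bucket sizes are made-up illustrative numbers, not the data center's actual settings.

```python
# Hypothetical token-bucket filter. A "website"-tuned profile allows a burst
# (the bucket) but refills slowly, so a steady voice-style UDP stream drains
# the bucket and gets dropped. All numbers here are illustrative assumptions.

def simulate(rate_pps, seconds, bucket_size, refill_pps):
    """Return (passed, dropped) packet counts for a steady UDP stream."""
    tokens = float(bucket_size)
    passed = dropped = 0
    for _ in range(rate_pps * seconds):
        # Refill proportionally between packets, capped at the bucket size.
        tokens = min(bucket_size, tokens + refill_pps / rate_pps)
        if tokens >= 1:
            tokens -= 1
            passed += 1
        else:
            dropped += 1
    return passed, dropped

# "Website" profile: refill can't keep up with 50 pkt/s voice -> heavy drops.
web_pass, web_drop = simulate(rate_pps=50, seconds=10, bucket_size=20, refill_pps=10)

# "Voice" profile: refill matches the stream rate -> nothing dropped.
voice_pass, voice_drop = simulate(rate_pps=50, seconds=10, bucket_size=20, refill_pps=50)

print(f"website profile dropped {web_drop} of {web_pass + web_drop} packets")
print(f"voice profile dropped {voice_drop} packets")
```

With the "website" numbers, roughly four out of five packets end up dropped once the burst allowance is spent, which would look to the user exactly like random disconnects and severe packet loss.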
Let me guess, our 00ber ddos protection failed or maybe we're just playing with the switch just to piss you off. Our servers say 87 days uptime but that doesn't mean we've been connected to anything other than your wallet.
We're laughing all the way to the bank you stupid Canadian fuck.
Is there really no log or anything that proves the disconnects? A full list of all the disconnects would be really nice to shove in their face, I'd think. But it sounds as if the server isn't even registering any downtime.
Yes, they have that log.
It's not traceable on the server itself, as that never went down.
But some of their network components should have registered a peak or drop in traffic right at the moment our server (most likely all servers) was unreachable.
Client logs could be sent, but you can't really see where the error lies from them.
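If someone does want to pull a disconnect list out of client logs, a quick script like the one below can collect the timestamps. The log line format shown is a made-up example, not the actual TeamSpeak client log format, so the pattern would need adjusting to whatever your client really writes.

```python
import re

# Assumed log format (illustrative only), e.g.:
#   2015-04-03 21:14:07 | WARN | connection lost, trying to reconnect
LINE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*connection lost")

def disconnects(log_lines):
    """Return the timestamp of every 'connection lost' entry, in order."""
    return [m.group(1) for line in log_lines
            if (m := LINE_RE.match(line))]

sample = [
    "2015-04-03 21:14:07 | WARN | connection lost, trying to reconnect",
    "2015-04-03 21:14:12 | INFO | reconnected to server",
    "2015-04-03 21:40:55 | WARN | connection lost, trying to reconnect",
]
print(disconnects(sample))  # ['2015-04-03 21:14:07', '2015-04-03 21:40:55']
```

Even so, as noted above, this only proves the client lost its connection; it can't say whether the cause was the server, the network, or the mitigation gear in between.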
Most likely the reason behind the downtime is a kiddie who got beaten at some game, or even kicked, and starts attacking that specific IP in retaliation.
What he doesn't know, or knows but ignores, is that he takes down everything making use of the same connection.
We are working on a solution for these issues.
One of them would be to host it somewhere that doesn't have 300 other voice servers in their network.
I can't see how they'd have real DDoS protection unless they had a completely separate server that would come up when theirs went down. There really is no protection against an attack like that, unfortunately.
And considering the server uptime hasn't changed, it obviously didn't move the traffic to a new, DDoS-free server. Do they say what their DDoS protection encompasses?
DDoS protection is farmed out to a 3rd party that funnels the incoming requests to the main server (the first weak link in their chain, IMHO):
Main Voice Tech said:
As you may be aware, there was a period of time for about 4 weeks that you, or your users, may have experienced random network disconnections and/or severe latency. These network issues were being caused by the DDoS mitigation equipment that is run by a 3rd party vendor to scrub incoming traffic. Essentially, it was creating false positives and dropping good IPs from the network and placing temporary blocks and/or throttling traffic causing severe packet loss. Our provider worked with the vendor to try and resolve the issue for a few weeks which resulted in some stabilization, but not to an acceptable level.
On April 5th, 2015 our provider made the decision to switch over to another supplier, and that change was made on April 6th, 2015. Since that time, we have had no known issues and no customer complaints relating to the network in the affected locations. We are now fully confident that the migration to the new vendor was a success and the problems have been resolved!
We would like to thank you again for all of your patience and continued support during this time. We sincerely apologize for the inconvenience this has caused and if you need anything or have any questions, please do not hesitate to open a support ticket and a staff member will be happy to assist.