Websocket implementation limits message rate?

+1 vote
749 views

I have a websocket server endpoint in Tomcat and a Tyrus 1.7 client. When I try to send text messages from the Tyrus client to Tomcat, it appears that messages get dropped when sent at a rate greater than 1 every ten seconds. Is there configuration that limits the rate of messages from clients? Couldn't find in docs and I don't see it in source, but suspect it might be DOS prevention.

Perhaps this is a Tyrus limit, but please let me know if you know of limits or config in Tomcat.
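One way to pin down where messages are lost is to tag each outgoing text frame with a sequence number and have the receiving endpoint record the gaps. The helper below is a hypothetical sketch (the `seq:<n>;` prefix and class name are assumptions, not part of Tyrus or Tomcat), but it is enough to tell "dropped in transit" apart from "never sent":

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: detect gaps in a stream of sequence-numbered messages.
// The sender prefixes each text frame with "seq:<n>;" and the receiving
// endpoint feeds every arriving message into onReceive().
public class DropDetector {
    private long expected = 0;
    private final List<Long> missing = new ArrayList<>();

    // Call once per received message, in arrival order.
    public void onReceive(String message) {
        long seq = Long.parseLong(message.substring(message.indexOf(':') + 1,
                                                    message.indexOf(';')));
        while (expected < seq) {          // every skipped number is a drop
            missing.add(expected++);
        }
        expected = seq + 1;
    }

    public List<Long> missingSequences() {
        return missing;
    }

    public static void main(String[] args) {
        DropDetector d = new DropDetector();
        for (long n : new long[] {0, 1, 3, 6}) {  // 2, 4, 5 never arrived
            d.onReceive("seq:" + n + ";");
        }
        System.out.println(d.missingSequences()); // prints [2, 4, 5]
    }
}
```

If the gaps appear only under load, comparing the sender's log of sequence numbers against this list shows whether the client ever put the missing frames on the wire.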

posted Aug 12, 2014 by anonymous

I cannot comment on the underlying issue, but you should be more specific about how you are determining this. Why do you think, and how do you know, that messages are being dropped at the Tomcat level?
There is no such limit on the Tomcat side; we have unit tests that send thousands of messages per second.

1 Answer

0 votes

It seems to me far more likely that this would be a limit in the client, because there is a practical limit to how fast a client would generate messages in a real-world situation, while a server must be able to handle many clients at once, and therefore must handle many times a single client's maximum rate.

answer Aug 12, 2014 by Jagan Mishra
Similar Questions
0 votes

Is it true that the current servlet-based websocket implementation will be deprecated due to the implementation of JSR-356? We are currently implementing a Tomcat 7-based websocket server that we hoped could scale up to at least 50K concurrent connections (or more), but are concerned about any known issues and/or limitations of the websocket implementation in Tomcat 7.

We are currently trying to test how high Tomcat 7 will scale with regards to the maximum number of concurrent websocket connections, but have already hit some problems with only 200 concurrent connections. Perhaps it's our multi-threaded client, or Tomcat configuration - not sure at this point. We have the Tomcat Connector configured with maxConnections=50000 and maxThreads=1000, so 200 concurrent connections shouldn't be a problem.

If anyone could comment on the stability of the Tomcat 7 servlet websocket implementation under high concurrency, that would be great. Additionally, if anyone has achieved tens of thousands of concurrent websocket connections with Tomcat 7, can you share how Tomcat was configured, what OS it was running on, and what client library you used in testing?
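For concreteness, a connector of the kind described might look like the fragment below in `server.xml`. This is an illustrative sketch, not a tested recipe; the values are examples. The NIO connector is the usual choice for large numbers of mostly-idle websocket connections, since the blocking connector ties a thread to each connection:

```xml
<!-- Illustrative server.xml fragment (values are examples, not a tuned
     configuration) for many concurrent, mostly-idle connections. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxConnections="50000"
           maxThreads="1000"
           connectionTimeout="20000" />
```

Note that failures at only a few hundred connections from a single test client are often client-side limits (open file descriptors, ephemeral ports, or the client library's thread model) or OS limits such as `ulimit -n`, rather than anything in Tomcat's configuration.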

0 votes

I'm looking into a solution that will make extensive use of websockets. Details are unimportant, but here's the question that I'd like to have some insight into. The current implementation (official example) seems independent of the JSR 356.
Is work underway to implement the javax.websocket.* objects, or is what's in org.apache.catalina.websocket it for the foreseeable future?
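For context: JSR-356 support did land in Tomcat (in Tomcat 8, and backported to Tomcat 7 from 7.0.47 onward, requiring Java 7), and the proprietary org.apache.catalina.websocket API was subsequently removed in Tomcat 8. A minimal javax.websocket endpoint looks like this (the `/echo` path and class name are just examples):

```java
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Minimal JSR-356 echo endpoint, assuming a container that implements
// javax.websocket (Tomcat 7.0.47+ or Tomcat 8). The container discovers
// the class via the @ServerEndpoint annotation; no web.xml entry is needed.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message, Session session) {
        return message; // the returned value is sent back to the client
    }
}
```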

+3 votes

I am looking for help in understanding why the size of the inbound WebSocket message is limited to 125 bytes. I realize that this may not even be the right place for my question, but am still hoping for a clue.

From looking at the RFC 6455, Sec. 5.2 Base Framing Protocol, I am making two conclusions:

  1. There's nothing in it to suggest a payload length asymmetry between inbound and outbound messages. Yet, although I am able to send very large messages to the browser, an attempt to send anything over 125 bytes results in an error and a connection shutdown. (I tried FF and Chrome on a Mac.)

  2. It's easy to see from the wire protocol why 125 is the simplest payload length, but other sizes up to an unsigned 64-bit int are supported. So the browser's failure to transmit more than 125 bytes indicates both the most restrictive payload size and a lack of support for fragmented messages.
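The framing rules in RFC 6455 Sec. 5.2 confirm the first half of point 2: 125 is simply the largest length that fits the base 7-bit field, and the values 126 and 127 escape to 16-bit and 64-bit extended length fields, so the protocol itself allows far larger payloads. A sketch of that encoding rule:

```java
// Sketch of RFC 6455 Sec. 5.2 payload-length encoding: the 7-bit field in
// the second frame byte holds lengths up to 125 directly; 126 and 127 are
// escape values announcing a 16-bit or 64-bit extended length field.
public class FrameLength {

    // Returns how many bytes the length occupies on the wire: 1, 3, or 9.
    public static int encodedLengthBytes(long payloadLength) {
        if (payloadLength <= 125) return 1;
        if (payloadLength <= 0xFFFF) return 3;   // 126 marker + 2 bytes
        return 9;                                 // 127 marker + 8 bytes
    }

    public static void main(String[] args) {
        System.out.println(encodedLengthBytes(125));     // 1
        System.out.println(encodedLengthBytes(126));     // 3
        System.out.println(encodedLengthBytes(70000));   // 9
    }
}
```

So a 125-byte ceiling observed in practice is an endpoint buffer limit, not a protocol limit.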

The error that FF gives reads "The decoded text message was too big for the output buffer and the endpoint does not support partial messages", which to me reads like they are saying that Tomcat did not indicate during the handshake that it accepts partial messages. True?

I can't speak for others, but for my project 125 bytes is unacceptably small. So, fundamentally what I need to know is this: do I need to implement my own fragmenting or am I missing something?
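If raising the receive buffer is not an option, application-level fragmenting is straightforward. The helper below is an assumption for illustration, not a Tomcat API: it splits a message into chunks that fit a small buffer, and the receiver concatenates them. (Under the javax.websocket API, the usual alternatives are `Session#setMaxTextMessageBufferSize(int)` to raise the limit, or an `@OnMessage` method with a trailing `boolean last` parameter to receive partial messages.)

```java
import java.util.ArrayList;
import java.util.List;

// Application-level fragmenting sketch (this helper is an assumption, not
// a Tomcat API): split a large text message into chunks small enough for
// the receiver's buffer; the receiver concatenates chunks in order, using
// an agreed "last chunk" marker or a known total length to reassemble.
public class Fragmenter {

    public static List<String> split(String message, int maxChunk) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < message.length(); i += maxChunk) {
            chunks.add(message.substring(i,
                    Math.min(i + maxChunk, message.length())));
        }
        if (chunks.isEmpty()) {
            chunks.add("");   // an empty message is still one (empty) chunk
        }
        return chunks;
    }

    public static void main(String[] args) {
        String big = "x".repeat(300);
        List<String> parts = split(big, 125);
        System.out.println(parts.size());          // 3
        System.out.println(parts.get(2).length()); // 50
    }
}
```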

0 votes

I have a JSF 2.0 app that executes (via ProcessBuilder) an external script. This script opens a PPTX via the PowerPoint ActiveX object, manipulates it, and saves it. It runs on Windows Server 2008 R2 64-bit, 4GB RAM, JDK 7.
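For reference, the kind of ProcessBuilder call described above might look like the sketch below; the script name, arguments, and class name are hypothetical. Building the command as a list of separate arguments avoids shell-quoting problems with paths that contain spaces:

```java
import java.io.File;
import java.util.List;

// Sketch of a ProcessBuilder invocation of an external Windows script
// (script name and arguments are hypothetical examples).
public class ScriptLauncher {

    public static ProcessBuilder build(String scriptPath, String pptxPath) {
        ProcessBuilder pb = new ProcessBuilder("cscript", "//Nologo",
                                               scriptPath, pptxPath);
        pb.redirectErrorStream(true);   // merge stderr into stdout
        pb.directory(new File(System.getProperty("java.io.tmpdir")));
        return pb;
    }

    public static void main(String[] args) {
        List<String> cmd = build("edit.vbs", "deck.pptx").command();
        System.out.println(cmd);  // [cscript, //Nologo, edit.vbs, deck.pptx]
    }
}
```

One hedged observation on the service question itself: Windows services run in a non-interactive session with a much smaller desktop heap than an interactive logon, and Microsoft does not support Office automation from services, which is a common reason the same automation works from startup.bat but fails under the service wrapper.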

When Tomcat 7 is launched using startup.bat (with the original settings), it works fine.

When Tomcat runs as a service, opening the PPTX in PowerPoint fails with an Out Of Memory error regardless of the Xmx settings (tomcat7w.exe).

I originally asked on a PowerPoint forum, but haven't received any explanation yet:
http://answers.microsoft.com/thread/37cbebf6-4003-4ab0-9295-92413aaecc2e

But since the entry point is Tomcat and the only difference between the problematic and non-problematic behavior is 'service' mode, maybe there is something related in the tomcat7.exe code base. Just guessing.

Does anybody have an idea why the two modes behave differently?

+2 votes

Is there a standard way to access the ServletContext from a WebSocket ServerEndpoint?
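One common container-neutral route under the javax.websocket API is to capture the ServletContext during the upgrade handshake via a custom `ServerEndpointConfig.Configurator` and stash it in the endpoint's user properties. A sketch under that assumption (the property key and class name are examples):

```java
import javax.servlet.http.HttpSession;
import javax.websocket.HandshakeResponse;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpointConfig;

// Capture the ServletContext during the websocket upgrade handshake and
// store it in the endpoint's user properties for later retrieval.
public class ContextConfigurator extends ServerEndpointConfig.Configurator {

    @Override
    public void modifyHandshake(ServerEndpointConfig config,
                                HandshakeRequest request,
                                HandshakeResponse response) {
        // getHttpSession() returns null if no HTTP session exists yet.
        HttpSession httpSession = (HttpSession) request.getHttpSession();
        if (httpSession != null) {
            config.getUserProperties()
                  .put("servletContext", httpSession.getServletContext());
        }
    }
}
```

The endpoint is then registered with `@ServerEndpoint(value = "/path", configurator = ContextConfigurator.class)`, and the stored context is read back in `@OnOpen` from `EndpointConfig#getUserProperties()`. Note the HttpSession caveat: a session must exist before the handshake for this to work.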

...