Issue Description
I have a Parse Server in production in an IoT environment, processing around 100 requests per second. This works well, with CPU around 30% and 25 ms response times. Parse Server is clustered on AWS across two c5.large instances.
I'm adding LiveQuery as a separate c5.large server. Parse Server and LiveQuery communicate changes over Redis, and that link works.
The problem: with just one client connected, the LiveQuery server's CPU usage sits between 20-35%. With two clients it jumps to around 40%, and with more than three clients the server crashes within minutes.
I'm looking for suggestions on how to diagnose the excessive CPU usage and subsequent crash. To be clear, subscriptions themselves work end to end, from Parse Server through LiveQuery to the client.
More information:
- Parse Server version: 3.0.0
- Parse Client: 2.1.0
- Number of Classes monitored by Live Query: 12
- Using Role ACL
- Using Session Token in client subscriptions
Here is how the LiveQuery server is configured:
```js
let parseApi = new ParseServer({
  databaseURI: `mongodb://${config.get('/mongo/userName')}:${config.get('/mongo/password')}@${config.get('/mongo/uri')}`, // Connection string for your MongoDB database
  appId: config.get('/parse/appId'),
  masterKey: config.get('/parse/masterKey'), // Keep this key secret!
  serverURL: `http://127.0.0.1:${config.get('/port/webapp')}/parse`,
  logLevel: 'ERROR',
  sessionLength: ONE_DAY, // in seconds; set to 24 hours
  schemaCacheTTL: ONE_MONTH_MS, // TTL for caching the schema; use a long TTL in production (default 5000 ms; 0 disables)
  cacheTTL: ONE_DAY_MS, // TTL for the in-memory cache in ms (default 5000)
  cacheMaxSize: 1000000, // maximum size of the in-memory cache (default 10000)
  enableSingleSchemaCache: true // share one schema cache across requests to reduce _SCHEMA queries (default false)
});

// Serve the Parse API on the /parse URL prefix
app.use('/parse', parseApi);
let port = config.get('/port/webapp');
let server = app.listen(port);

// Initialize a LiveQuery server instance; `app` is the Express app of your Parse Server
if (config.get('/parseAppServerIsLocal')) {
  debug(`Starting Live Query Server on port ${config.get('/port/parseLiveQuery')}`);
  let httpServer = require('http').createServer(app);
  httpServer.listen(config.get('/port/parseLiveQuery'));
  let liveQueryParams = {
    redisURL: config.get('/server/redis')
  };
  let parseLiveQueryServer = ParseServer.createLiveQueryServer(httpServer, liveQueryParams);
}
```
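In case it matters: I currently pass only `redisURL`. The parse-server docs list a few more `createLiveQueryServer` options that might be relevant for tuning or diagnosing this; a sketch with illustrative values (not what I actually run):

```js
// Illustrative values only -- option names are from the parse-server LiveQuery docs.
let liveQueryParams = {
  redisURL: config.get('/server/redis'),
  logLevel: 'VERBOSE',         // more detail while diagnosing the CPU spike
  websocketTimeout: 10 * 1000, // ms between checks that the client link is alive (default 10 s)
  cacheTimeout: 5 * 60 * 1000  // per the docs: how long a client-supplied sessionToken is cached before revalidation
};
```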
Steps to reproduce
Configure Parse Server/LiveQuery per the above with Redis, and connect one client that subscribes to all 12 classes. The queries are not complex. Example:

```js
const nodeStatusesQuery = new ParseObj.Query('NodeStatus').limit(10000);
```

Observe the jump in CPU usage under high throughput (100 requests per second to Parse Server).
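To see where the cycles are going, Node's built-in V8 sampling profiler may help; a minimal sketch, assuming the LiveQuery entry script is `app.js` (placeholder name):

```shell
# Start the LiveQuery process under the V8 sampling profiler.
node --prof app.js

# After reproducing the load, turn the isolate log into a readable summary
# (the "ticks" table shows which functions dominate CPU time).
node --prof-process isolate-0x*-v8.log > profile.txt
```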
Expected Results
I'm not sure what the CPU usage should be, but 30% for one client, 40-50% for two, and crashing after that doesn't seem right.
Actual Outcome
The LiveQuery server with just one client connected has CPU usage between 20-35%. With two clients this jumps to around 40%. With more than three clients the server crashes within minutes.
Environment Setup

Server
- parse-server version: 3.0.0
- Operating System: Linux
- Hardware: AWS ELB, c5.large instances
- Localhost or remote server: AWS

Database
- MongoDB version: 3.4.14
- Storage engine: Not sure
- Hardware: mLab hosted on AWS
- Localhost or remote server: mLab