Sunday, 19 April 2015

nginx and node.js optimisation - setTimeout having adverse effect

I'm having a little trouble getting a node.js app fronted by nginx to perform as expected. I've stripped everything back to its basics to make sure I have optimised things correctly.


For every request, the node.js app waits 10 seconds and then returns a response. The delay is required for the purposes of this app (it is going to be used for load/performance testing). The index.js I have for the app is:



var http = require('http');

// config.serverPort matches the nginx upstream below (127.0.0.1:8001)
var config = { serverPort: 8001 };

var requestCount = 0;
var responseCount = 0;

http.createServer(function (request, response) {
    requestCount++;
    console.log('Dealing with request ' + requestCount);
    setTimeout(function () {
        responseCount++;
        console.log('Sending back response ' + responseCount);
        response.writeHead(200, {"Content-Type": "text/plain"});
        response.end('Hello World\n');
    }, 10000);
}).listen(config.serverPort);


I have then fronted the node.js app with nginx. The intention is to have a few instances of the node.js app, with nginx handling SSL and acting as a load balancer. My nginx.conf is below:



worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 8192;
    multi_accept on;
    use kqueue;
}

worker_rlimit_nofile 65536;

http {

    include mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    upstream backend {
        server 127.0.0.1:8001;
    }

    server {
        # proxy to backend
        location / {
            proxy_redirect off;
            proxy_connect_timeout 10;
            proxy_send_timeout 15;
            proxy_read_timeout 20;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://backend;
        }
    }

    include /etc/nginx/sites-enabled/*;
}


Now, my application fires 300 concurrent requests in one go. When they are fired directly at the node.js app, everything is fine and I get back 300 responses. However, when I fire the requests through nginx, only 248 responses come back; the other 52 requests time out. The HTTP client I am using in the application has a requestTimeout of 60 seconds and sends 'Connection: Keep-Alive'.


When I look at the log file for the node.js app, nginx has not forwarded the outstanding 52 requests to it; they seem to be blocked somewhere. The only thing written in the nginx error log is:



*559 setsockopt(TCP_NODELAY) failed (22: Invalid argument) while keepalive, client: 127.0.0.1, server: 0.0.0.0:80


This error does not worry me per se, as I understand it to mean that the client has closed the connection in line with its 60-second timeout.


The weird thing is, if I set the timeout in the node.js app to 50ms (instead of 10000ms), everything works fine! The timeout seems to be having an adverse effect on nginx's ability to pass all 300 requests upstream.
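As a sanity check, I satisfied myself that setTimeout itself doesn't serialize anything: timers scheduled together all fire after roughly one delay, not delay × count. A quick sketch of that check:

```javascript
// If setTimeout were blocking, 5 timers of 100ms each would take ~500ms
// in total; because it isn't, they all fire after ~100ms.
var start = Date.now();
var pending = 5;

for (var i = 0; i < 5; i++) {
  setTimeout(function () {
    pending--;
    if (pending === 0) {
      var elapsed = Date.now() - start;
      console.log('All 5 timers fired after ' + elapsed + 'ms');
    }
  }, 100);
}
```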


Is anyone able to point out any missing nginx optimisations or errors in the node.js app I have written? I am a little stumped as to why setTimeout would be having this effect. Is there something about setTimeout(...) that blocks a socket? I appreciate any help offered.


Thanks!

