RPC Timeout promise rejection seemingly not handled cleanly #72
When deployed using greenlock-express with cluster: true (the setup is sketched at the end of this report), I see numerous occurrences of:
(node:65) UnhandledPromiseRejectionWarning: Error: worker rpc request timeout
at Timeout._onTimeout (/app/node_modules/greenlock-express/worker.js:70:20)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
(node:65) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag
--unhandled-rejections=strict
(see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 9)
(node:112) UnhandledPromiseRejectionWarning: Error: worker rpc request timeout
at Timeout._onTimeout (/app/node_modules/greenlock-express/worker.js:70:20)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
(node:112) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag
--unhandled-rejections=strict
(see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 13)

Upon initial inspection of the codebase, it appears that the greenlock-express implementation of Node's cluster module is meant to handle these instances internally by cleanly killing off and re-spawning unresponsive workers. If that is not the intent, then it becomes important for greenlock-express to expose the underlying cluster object (which a review of the codebase suggests it currently does not), so these events can be captured and properly handled, as sketched below.
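To illustrate the kind of handling I mean, here is a minimal sketch using the plain Node.js cluster API; this is not greenlock-express code, only an example of what becomes possible once the cluster object is reachable:

```js
"use strict";

// Illustration only: if greenlock-express exposed its cluster object, a caller
// could attach listeners like these to notice and replace workers that die or
// become unresponsive, instead of the rejection going unhandled.
const cluster = require("cluster");

if (cluster.isMaster) {
    cluster.on("disconnect", function (worker) {
        console.warn("worker " + worker.process.pid + " disconnected");
    });

    cluster.on("exit", function (worker, code, signal) {
        console.warn(
            "worker " + worker.process.pid + " exited (" + (signal || code) + "), re-spawning"
        );
        cluster.fork();
    });
}
```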
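In the meantime, the only stopgap I can see is a process-level hook; the match on the message text is obviously fragile and not a real fix:

```js
"use strict";

// Stopgap: intercept this specific rejection at the process level so it is
// logged deliberately instead of surfacing as an UnhandledPromiseRejectionWarning.
process.on("unhandledRejection", function (reason) {
    var message = (reason && reason.message) || String(reason);
    if (/worker rpc request timeout/.test(message)) {
        console.warn("greenlock-express worker rpc timed out:", message);
        return;
    }
    // Anything else is unexpected; re-throw so it is not silently swallowed.
    throw reason;
});
```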
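For completeness, the deployment that produces these warnings is initialized roughly like this (reduced to a minimal sketch; the email and paths are placeholders):

```js
"use strict";

var app = require("./app.js"); // ordinary Express app exported as a request handler

require("greenlock-express")
    .init({
        packageRoot: __dirname,
        configDir: "./greenlock.d",
        maintainerEmail: "admin@example.com", // placeholder
        cluster: true // the setting under which the warnings appear
    })
    .serve(app);
```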