Your monitoring system reports that CPU usage on the Mercury MMSC P2P node has suddenly spiked at some point in time. A sample alert generated by a monitoring system (in this case, Nagios) is shown below:
***** Nagios Monitor XI Alert *****
Notification Type: PROBLEM
Service: System Load
In the past, such a spike was caused by a subscriber sending a message containing a very large image to hundreds of other subscribers. This triggered a burst of fetch requests, and CPU utilization rose because many fetch requests carrying the large image were processed in parallel.
When Mercury (like any other MMSC) receives a message, regardless of the sender, it saves the message to internal storage as is and notifies the B-party that a new message has arrived. When the MMS application on the B-party handset starts fetching the message, the MMSC transcodes it according to the B-party handset's capabilities. When no device can receive such a large picture as is, Mercury transcodes the picture during every fetch request. In some cases the picture has to be transcoded several times, because Mercury makes multiple attempts to reach the target size and resolution. So, regardless of where a message came from, it is transcoded whenever a fetch request is performed, and hundreds of such operations running in parallel cause a spike in CPU utilization.
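The multiplication effect described above can be sketched as follows. This is an illustrative model, not Mercury's actual code: the function name, the halving step, and the attempt limit are assumptions standing in for a real re-encode pass.

```python
def transcode_to_target(original_bytes: int, target_bytes: int,
                        max_attempts: int = 5) -> tuple[int, int]:
    """Simulate iterative transcoding toward a target size.

    Each pass stands in for one CPU-heavy re-encode; the MMSC repeats
    passes until the result fits the target handset's limit.
    Returns (final_size, attempts_used).
    """
    size = original_bytes
    attempts = 0
    while size > target_bytes and attempts < max_attempts:
        size //= 2  # one full re-encode of the image per pass
        attempts += 1
    return size, attempts

# A 4 MB photo fetched by a handset that accepts at most 300 KB
# needs several passes:
size, attempts = transcode_to_target(4_000_000, 300_000)
# With hundreds of recipients fetching the same oversized picture,
# those per-fetch passes multiply, and CPU utilization spikes.
```

The key point the sketch makes visible: the cost is per fetch, not per message, so one sent message can fan out into hundreds of multi-pass transcoding jobs.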
Mercury does not provide a protection mechanism for such cases, nor does it monitor CPU load. However, Mercury can work with external transcoding servers: one or more additional machines can be configured to transcode message content, and Mercury will balance the load between these servers.
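A minimal sketch of the load-balancing idea, assuming a simple round-robin policy (the source does not specify Mercury's actual balancing algorithm, and the hostnames are made up):

```python
from itertools import cycle


class TranscoderPool:
    """Hypothetical dispatcher that spreads transcoding jobs over
    external transcoder machines, keeping the MMSC's own CPU free."""

    def __init__(self, hosts: list[str]):
        self._rr = cycle(hosts)

    def next_host(self) -> str:
        # Round-robin: each fetch's transcoding job goes to the next
        # server in the list.
        return next(self._rr)


pool = TranscoderPool(["transcoder-1.example.net",
                       "transcoder-2.example.net"])
jobs = [pool.next_host() for _ in range(4)]
# jobs alternates between the two hosts
```

Adding a second transcoder machine to the pool roughly halves the per-host load during a fetch storm, which is the mitigation the paragraph above recommends.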