FAQ Entry

Entry #184: What are the resource requirements for using the PC client?

Question
What are the resource requirements for using the PC client?
Answer
To answer this question, the following issues need to be considered:

1. What is the nature of the workload in regard to CPU, Memory, and I/O usage?
2. What are the network connection and bandwidth considerations?
3. How much data will be accumulated, where shall it be stored, and how will it be accessed?

ViewPoint is not particularly CPU intensive, except when performing operations on many connections or files. Depending on the number of connections and Trace File Manager operations, anything from a single P-5 (at frequencies as low as 166 MHz) up to a twin Pentium Pro NT Server can be appropriate. Some sites monitor as many as two or three dozen machines with a P-5 166. ViewPoint is basically single threaded and runs many synchronous operations; the second CPU on an NT Server serves the Trace File Manager operations and the other processes and threads running on the system. If you're running AutoWeb and several Trace File Manager operations, you can still get away with a single-processor system if it is fast.

ViewPoint does perform memory-intensive operations, especially when it is maintaining several connections and performing Trace File Manager operations, and you do not want to stress the processor with swapping. The system should have at least 32 MB of RAM; 64 MB is a better minimum for multiple-connection processing. For a system that will be asked to maintain many connections, perform regular TFManager operations, and run AutoWeb, 128 MB to 256 MB is appropriate. Installing more real memory is the best and least expensive way to avoid future performance problems: ample real memory reduces both CPU utilization and physical I/O.
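As a rough illustration only, the RAM guidance above can be summarized as a simple rule of thumb. The 32/64/128-256 MB tiers come from this answer; the function name and the cutoff used to separate "multiple" from "many" connections are assumptions made for the sketch.

def recommended_ram_mb(connections, runs_tfmanager=False, runs_autoweb=False):
    # Rule-of-thumb RAM sizing based on the guidance above.  Treating "many"
    # connections as a dozen or more is an assumption made for this sketch.
    if connections >= 12 and (runs_tfmanager or runs_autoweb):
        return 128      # 128-256 MB for many connections plus TFManager/AutoWeb
    if connections > 1:
        return 64       # better minimum for multiple-connection processing
    return 32           # absolute minimum

print(recommended_ram_mb(24, runs_tfmanager=True, runs_autoweb=True))  # -> 128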

A typical data sample, or datablock, from ViewPoint is generally between 2K and 7K. On very large systems it can reach 15K, but do not expect anything larger than 7K from IDX. That sample is sent on average once every 30 seconds, so the aggregate bandwidth requirement, even for many connections, is not very significant. For example, if you were monitoring 100 systems from a single location, the most you would expect to be downloading at any one time is a few hundred kilobytes, averaging about 300K to 400K per 30-second interval, or roughly 10K to 13K per second. You simply need a CPU fast enough to support the sockets and packet transfers and to write the data.
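As a quick back-of-the-envelope check, the figures above can be reproduced with the arithmetic below. The 100-system count, the 30-second interval, and the 3K-4K typical sample size are taken from the example; everything else is simple multiplication and division.

# Back-of-the-envelope bandwidth estimate for monitoring many systems.
systems = 100                          # systems monitored from one location
sample_kb_low, sample_kb_high = 3, 4   # typical datablock size in KB (within the 2K-7K range)
interval_s = 30                        # one sample per system every 30 seconds

per_interval_low = systems * sample_kb_low    # 300 KB per 30-second interval
per_interval_high = systems * sample_kb_high  # 400 KB per 30-second interval

print(f"{per_interval_low}-{per_interval_high} KB per {interval_s}-second interval")
print(f"{per_interval_low / interval_s:.0f}-{per_interval_high / interval_s:.1f} KB/s average")
# -> 300-400 KB per 30-second interval, about 10-13 KB/s on average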

Regarding storage, because the primary storage location is the ViewPoint PC's local drive or a Novell or NT network drive, the costs are relatively low. Each live connection needs roughly 8 MB to 12 MB per day for live data (.VTF). The compressed data files (.VSQ) are approximately 5% of the size of the live files, with the exception of event or process data (.VEV); the event files are usually 50% to 150% of the live data file size. History files do not have event files; only the live files and compressed live files do. Event files are not compressed by ViewPoint because reading the process data back out of a compressed file would be slow and CPU intensive. For an example implementation where the live VTF is 12 MB and the site keeps 12 days of compressed 30-second data plus a history file, the space requirement adds up something like this (a worked version of the arithmetic follows the breakdown):

Live VTF - 12 MB
Live VEV - 12 MB
12 compressed daily files - @ 600 KB ea = 7.2 MB
12 VEV files - 120 MB
History file - approximately 45 KB per day of history

Total = about 160 MB
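The same breakdown can be expressed as a small sizing calculation. The ratios (compressed files at about 5% of the live file, daily .VEV files sized to match the 120 MB in the breakdown, history growth of about 45 KB per day) come from the answer above; the 180 days of history retention used here is an assumption for the sketch, not a figure from the FAQ.

# Rough disk-space estimate for one ViewPoint connection, following the example above.
live_vtf_mb = 12.0                        # live data per day (.VTF), from the example
live_vev_mb = 12.0                        # live event data (.VEV)
compressed_days = 12                      # days of compressed 30-second data retained
compressed_daily_mb = live_vtf_mb * 0.05  # .VSQ is roughly 5% of the live file = 0.6 MB
vev_daily_mb = 10.0                       # retained daily .VEV files (10 MB each matches the 120 MB above)
history_kb_per_day = 45                   # history file growth per day
history_days = 180                        # assumption for the sketch, not stated in the FAQ

total_mb = (live_vtf_mb + live_vev_mb
            + compressed_days * compressed_daily_mb      # 7.2 MB
            + compressed_days * vev_daily_mb             # 120 MB
            + history_days * history_kb_per_day / 1024)  # about 8 MB
print(f"Total: about {total_mb:.0f} MB")                 # -> about 159 MB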

This total can be increased or decreased significantly by changing the amount of compressed 30-second data retained. That data is generally used for problem resolution, so it is not really useful after a week or two. Beyond that point, history files with half-hour intervals are ideal for capacity planning and trend analysis: several months later you don't care which processes caused the system to be busy; it is only important to characterize what the average and peak loads were, and how they changed in relation to the various workloads running.