DICOM Data Benchmarking
Dicom Systems is best known for its signature Unifier, alongside a wide range of other tools. Recognized by top healthcare enterprises, government agencies, and partners for next-generation enterprise imaging interoperability, the platform is proven in worldwide deployments. That experience has given us practical insight into the day-to-day concerns of the people who run healthcare information infrastructure. One of our major clients, Stanford Health Care, has been kind enough to share its experience with us.
Stanford Health Care Overview
Stanford Health Care is a Level 1 Trauma Center, the only one of its level between San Francisco and San Jose, with 613 licensed beds and more than 12,000 employees, roughly 1 in 10 of whom are residents or fellows. It has been recognized by U.S. News & World Report as one of the Top 10 Hospitals for 2017-18. Its infrastructure is managed in-house, and it maintains a standard process for engaging vendors and deploying their applications on the enterprise infrastructure; security and application deployment follow a standards-based approach. Between Stanford's clinical and research teams there are many specialty imaging applications and areas, so for optimal efficiency it is important that each team has access to the images relevant to it, and no more.
Dicom Systems Unifier
Our Unifier is built for exactly this purpose. To borrow an analogy from Trevor Walker, Stanford's Principal Systems Analyst, we deliver water at the right pressure, the right temperature, and the right time, optimizing data flow. The Unifier adapts readily to different organizations and structures; at Stanford Health Care it is chiefly used to manage a set of virtual machines dedicated to functions such as image routing, modality worklist proxying, and query/retrieve (QR) proxying.
During implementation and go-live, the importance of data benchmarking cannot be overstated. Measure and judge the system's results consistently against your requirements. Say, for example, you need to move a thousand-slice CT exam together with a full prior. On a SATA disk with limited I/O throughput, disk speed alone can make that transfer take over five minutes, even with an empty queue. After that exercise, you might reasonably conclude that an upgrade to solid-state storage is in order. The Dicom Systems Unifier supports testing and staging scenarios on virtual machines just as on physical ones, and Stanford Health Care was eager to verify that I/O and network speed would stand up to the traffic the enterprise was going to push through it. Be sure, in your testing process, to benchmark I/O, memory, and CPU with different tools and at different times of day. Benchmarking data captured before and after go-live is a valuable reference when you later check, say six months after deployment, that performance has not degraded. Replicating tests so that you can accurately pinpoint any problems the system might be having makes everyone's job a lot more straightforward.
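The back-of-the-envelope estimate above can be sketched as a simple model: total time is raw data time plus a per-file overhead, which is what dominates on spinning disks holding thousands of small DICOM objects. All figures here (slice size, per-file latency, sustained throughput) are assumptions for illustration, not measured values from any deployment.

```python
# Rough transfer-time model for moving a CT study. All numeric figures
# below are illustrative assumptions, not vendor benchmarks.

def transfer_seconds(num_files, file_mb, per_file_latency_s, throughput_mb_s):
    """Seconds to move a study: raw data time plus per-file overhead."""
    data_time = (num_files * file_mb) / throughput_mb_s
    overhead = num_files * per_file_latency_s
    return data_time + overhead

# A 1,000-slice CT plus a full prior: ~2,000 small files of ~0.5 MB each.
files, slice_mb = 2000, 0.5

# Assumed SATA spindle: ~100 MB/s sustained, but a high per-file seek cost.
sata = transfer_seconds(files, slice_mb, per_file_latency_s=0.15, throughput_mb_s=100)
# Assumed SSD: higher bandwidth and negligible seek cost.
ssd = transfer_seconds(files, slice_mb, per_file_latency_s=0.001, throughput_mb_s=400)

print(f"SATA: ~{sata / 60:.1f} min, SSD: ~{ssd:.1f} s")
```

Under these assumed numbers the SATA case lands in the five-minute range described above, while the SSD case completes in seconds; the point of the exercise is that measuring your own per-file latency tells you which term dominates.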
Data Benchmarking in Action
Stanford Health Care puts a lot of stock in being able to assess these detailed metrics in real time. With multiple imaging systems actively accessing its PACS and other systems, it needs maximum visibility into its traffic, and the Unifier layer in between provides that for the imaging support and troubleshooting teams. When a query comes in, the first step in the investigation is the DICOM router: the statistics and logs the Unifier has recorded. Comparing overall performance with specific instances of performance, for example, can reveal when a data-transfer problem stems from something other than a software limitation. It also becomes easier to compare peak and off-peak traffic, weekend traffic, and other valuable slices of the data. Using this visibility into networking data to identify bottlenecks dramatically improves your ability to optimize the system and eradicate inefficiencies.
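The peak-versus-off-peak comparison described above amounts to bucketing transfer records by time of day. A minimal sketch, assuming a hypothetical log of (timestamp, megabytes moved, seconds taken) tuples; the Unifier's actual log format and field names will differ.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical transfer-log records: (timestamp, MB moved, seconds taken).
# These values are invented for illustration.
records = [
    ("2018-03-05 09:14:00", 512, 40),   # mid-morning, busy network
    ("2018-03-05 09:40:00", 256, 22),
    ("2018-03-05 02:05:00", 512, 12),   # overnight, quiet network
    ("2018-03-05 02:30:00", 1024, 25),
]

def throughput_by_hour(rows):
    """Average MB/s per hour of day, for comparing peak vs off-peak behavior."""
    buckets = defaultdict(list)
    for ts, mb, secs in rows:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
        buckets[hour].append(mb / secs)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

print(throughput_by_hour(records))
```

A large gap between the overnight and mid-morning averages points at network or storage contention rather than a software limitation, which is exactly the distinction the troubleshooting teams are trying to draw.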
One of the Unifier's other strengths is intelligent routing. For an environment like Stanford Health Care's, where the patient ID format in the PACS archive differs from the format the Radiology Information System supplies, the Unifier absorbs the volume of tag morphing that must necessarily occur, making the discrepancy essentially a non-issue. The smart image routing and retrieve proxy makes any type of migration very straightforward.
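In essence, tag morphing is a rewrite rule applied to DICOM header fields in flight. A minimal sketch of one such rule, with the header modeled as a plain dict and both ID formats invented for illustration (production rules run inside the routing engine, not as standalone scripts):

```python
import re

def morph_patient_id(tags):
    """Rewrite a hypothetical RIS-style patient ID (e.g. 'SHC-0012345')
    into the bare numeric form an archive expects (e.g. '0012345').
    IDs that don't match the pattern pass through unchanged."""
    morphed = dict(tags)
    match = re.fullmatch(r"[A-Z]+-(\d+)", morphed["PatientID"])
    if match:
        morphed["PatientID"] = match.group(1)
    return morphed

incoming = {"PatientID": "SHC-0012345", "Modality": "CT"}
print(morph_patient_id(incoming)["PatientID"])  # -> 0012345
```

Because the rule sits in the routing layer, neither the RIS nor the archive needs to change; each side keeps seeing the ID format it expects.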
In an organization as large as a Level 1 Trauma Center, many different departments, including research, pediatrics, and clinical teams, require large quantities of data movement. Deploying the Unifier offloads the pressure that ingestion and direct query/retrieve place on the PACS. This matters especially with a legacy PACS from 10 or 15 years ago: when datasets must be shifted for use with contemporary AI and machine-learning algorithms, often in collaboration with external research institutions, the antiquated system can struggle. The Dicom Workflow Unifier means the data can be anonymized and pushed securely, both in transit and at rest. (If the data needs to be identifiable, this can also be achieved after successful delivery.)
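The anonymize-then-re-identify workflow described above can be sketched as dropping directly identifying fields and replacing the patient ID with a salted one-way hash, so only whoever holds the salt mapping can re-link the study after delivery. The tag names and salt below are illustrative assumptions, not the Unifier's actual de-identification profile.

```python
import hashlib

# Illustrative set of directly identifying fields to drop.
IDENTIFYING_TAGS = {"PatientName", "PatientBirthDate", "PatientAddress"}

def anonymize(tags, salt="example-salt"):
    """Return a copy of the header with identifying fields removed and the
    patient ID replaced by a salted hash (re-linkable only via the salt)."""
    out = {key: value for key, value in tags.items()
           if key not in IDENTIFYING_TAGS}
    digest = hashlib.sha256((salt + tags["PatientID"]).encode()).hexdigest()
    out["PatientID"] = digest[:16]
    return out

study = {"PatientID": "0012345", "PatientName": "DOE^JANE", "Modality": "CT"}
print(anonymize(study))
```

Keeping the salt (or an ID-to-hash lookup table) on the sending side is what makes delayed re-identification possible without ever shipping identifiers to the research partner.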
Future Planning for Peak Performance
Of course, the larger the center and the more data stored, the more cumbersome migration becomes. Pulling individual studies out and pushing them to the cloud is manageable, but not fast enough for clinical or research purposes. Using the Unifier as the front end for modalities, combined with other secure, well-known technologies, puts migration avoidance on the table: rather than a wholesale migration, PACS replacement or enhancement of the existing PACS becomes an option.