
Abstract

Tremendous interest in visualizing massive datasets has promoted tiled-display wall systems, which offer an immersive and collaborative environment with extremely high resolution. For efficient visualization, the rendering process should be parallelized and distributed among multiple nodes. The Data Observatory at Imperial College London has a unique setup of 64 screens driven by 32 machines, providing a resolution of over 130 megapixels. Various applications have been developed to achieve high-performance visualization by implementing parallel rendering techniques and incorporating distributed rendering frameworks. ParaView is one such application, targeting the visualization of scientific datasets with computational efficiency in mind. The main objective of this project is to leverage the potential of the Data Observatory and ParaView by fostering data exploration, analysis, and collaboration through a scalable, high-performance approach. The primary concept is to configure ParaView on a distributed cluster and assign the appropriate view to each screen by controlling ParaView's virtual camera. Interaction events are broadcast to all connected nodes in the cluster so that they update their views accordingly. The major challenges of such an implementation are synchronizing the rendering across all screens, maintaining data coherency, and managing data partitioning. Moreover, the project aims to evaluate the effectiveness of large display systems compared to typical desktop screens. This was achieved by conducting two quantitative studies assessing individual and collaborative task performance. The first task investigated mental rotation by displaying a pair of 3D models, as proposed by Shepard and Metzler, on the screen in different orientations; the participant was then asked whether the two models were identical or mirrored. This evaluates individual task performance by studying the ability to recognize orientation changes in 3D objects. The task consisted of two levels, easy and hard: in the easy level, the second model was rotated by at most 30° on two axes, whereas the hard level placed no limit on the rotation angle. The second task was developed specifically for ParaView to assess collaboration. Participants had to use basic navigation operations to find hidden needles injected into a 3D brain model within 90 seconds. In both tasks, completion time and correctness were measured in two environments: 1) the Data Observatory, and 2) a simple desktop screen. The average number of correct responses in the mental rotation task was calculated over all participants. The number of correct answers in the Data Observatory was significantly higher than on the desktop screen regardless of the amount of rotation. Participants could better distinguish mirrored objects from identical ones in the Data Observatory, with 86.7% and 73.3% correct in the easy and hard levels, respectively, whereas on the typical desktop screen participants answered fewer than half of the hard-level questions correctly. This indicates that immersive large-display environments provide a better representation and depth perception of 3D objects, thereby improving task performance when visualizing 3D scenes in fields that require detecting variations in position or orientation.
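
The per-screen view assignment described above can be illustrated with a minimal sketch, assuming a paraview.simple scripting environment. The grid dimensions and helper function are hypothetical placeholders rather than the project's actual code; they show one standard way of shifting a camera frustum to a single tile of a larger wall.

    # Hypothetical sketch: aim one node's camera at its cell of a tiled wall.
    # Grid dimensions and tile indices are placeholders; the wall's actual
    # layout may differ from the values assumed here.
    from paraview.simple import GetActiveCamera, Render

    TILES_X, TILES_Y = 16, 4  # placeholder layout of the 64-screen wall

    def aim_camera_at_tile(tile_x, tile_y):
        """Shift this node's camera frustum to its cell of the shared scene.

        vtkCamera's window centre is given in normalized coordinates where
        the default view spans [-1, 1], so shifting the centre by 2 moves
        the frustum by one full tile; the camera's view angle is assumed to
        describe a single tile's extent.
        """
        cam = GetActiveCamera()
        cx = -(TILES_X - 1) + 2.0 * tile_x
        cy = -(TILES_Y - 1) + 2.0 * tile_y
        cam.SetWindowCenter(cx, cy)
        Render()

    # Example: the node driving the bottom-left screen of the wall.
    aim_camera_at_tile(0, 0)

In a full system, the interaction events mentioned above would be broadcast to every node, each of which re-applies its own tile offset before re-rendering.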
Overall, the average completion times on the two displays were similar for the easy task, whereas participants required longer to complete the hard task in the Data Observatory. This may be because the large display occupies a wide visual field, giving viewers more opportunity to ponder the right answer. In the collaborative search task, participants in the Data Observatory found all the hidden needles within the time limit: the fastest group completed the task in 36 seconds, and the longest recorded time was around 1 minute 12 seconds. On the desktop screen, by contrast, all participants used the full 90 seconds. In the small-screen environment, the mean proportion of correct responses was about 55%, and the maximum number of needles found was 3 out of 4, achieved by only one group. To evaluate the overall efficiency of the Data Observatory, a one-way ANOVA was used to test for significant effects of display type on correctness in both tasks; completion time was excluded from this analysis because the two tasks differ in nature. The ANOVA revealed a significant effect of display type on the number of correct responses, F(1,48) = 10.517, p < 0.002, indicating that participants performed better in the Data Observatory than on the simple desktop screen. These results therefore support the hypothesis that large displays improve task performance and collaborative activities in terms of accuracy. The integration of both system solutions provides a novel approach to visualizing the enormous amounts of data generated by complex scientific computing, adding great value for researchers and scientists in analyzing, discussing, and discovering the underlying behavior of phenomena.
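
For reference, a one-way ANOVA of this form can be computed with standard tools. The sketch below uses SciPy's f_oneway on made-up placeholder scores, not the study's data, purely to illustrate the kind of test reported above.

    # Hedged sketch of a one-way ANOVA comparing correctness across the two
    # display types. The score lists are placeholders, NOT the study's data.
    from scipy import stats

    # Per-participant correctness (e.g. proportion of correct responses)
    # for each display condition -- illustrative values only.
    wall_scores = [0.9, 0.8, 1.0, 0.7, 0.9]
    desktop_scores = [0.6, 0.5, 0.7, 0.4, 0.6]

    # One-way ANOVA with display type as the single factor; with two groups
    # this is equivalent to an independent-samples t-test (F = t**2).
    f_stat, p_value = stats.f_oneway(wall_scores, desktop_scores)
    print(f"F = {f_stat:.3f}, p = {p_value:.4f}")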
