I. Introduction

Studies reveal that group members often collaborate when searching for information, even when they are not explicitly asked to do so [1]. The activity in which a group of people engages in a common information seeking task is called Collaborative Information Seeking (CIS). Over the past few years, CIS research has focused on providing solutions and frameworks to support this process [2]. However, work in this field to date has assumed that information seekers engaged in CIS activity share the same access modality, the visual modality. This focus has overlooked the needs of users who employ other access modalities such as haptic and/or audio. Visually Impaired (VI) employees in a workplace may often have to collaborate with their sighted team members when searching the web. Given that a VI individual's search behaviour is already challenged by poor web design and the shortcomings of current assistive technology [3][4], the difficulty of engaging collaboratively in web search activity with peers can be considered a major barrier to workplace inclusion. This study explores the under-investigated area of cross-modal collaborative information seeking (CCIS): the challenges and opportunities that exist in supporting VI users to take an effective part in collaborative web search tasks with sighted peers in the workplace.

II. Study Design

To develop our understanding of the issues, we investigated the CCIS activity of pairs of VI and sighted participants. The study consisted of an observational experiment in which 14 pairs of VI and sighted users completed two web search tasks, followed by scenario-based interviews conducted with seven of the 14 pairs. We conducted the experiment to examine the patterns of CCIS behaviour and the challenges that occur. In the scenario-based interviews we examined the techniques used, the tools employed and the ways information is organized for both individual and collaborative use. In the observational study, all VI participants used a speech-based screen reader. Each pair was given two search tasks: one performed in a co-located setting and the other in a distributed setting. For the co-located task, the participants were asked to work collaboratively to organize a trip to the United States, while for the distributed task they were asked to plan a trip to Australia. In the distributed condition, participants were seated in different locations and were told that they were free to use any method of communication they preferred; 5 pairs used email and 9 pairs used Skype. Seven pairs from the observational study took part in the scenario-based interviews. The interviews involved the interviewer describing a CIS activity to the participants, followed by four scenarios containing questions about the management of the retrieved information.

III. Observational Study Findings

A. Division of Labour

In the co-located condition, discussion about the division of labour occurred at two levels: first in the initial discussion, and second as a result of one participant interrupting his/her partner in order to complete a certain action. Three reasons were identified for these interruptions: (1) VI participants requested assistance from their sighted partner when viewing large amounts of information. (2) When browsing websites with inaccessible components, VI users asked sighted participants to perform the task or to assist them in performing it. (3) The third reason related to the context of the task. In contrast, in the distributed condition, discussion about the division of labour occurred only at the beginning of the task. The pair divided the work and then worked independently, updating each other about their progress only through the communication tool. Unlike the co-located sessions, collaboration in the later stages was not observed. Additionally, VI participants made fewer requests for assistance in this condition, as they seemed more reluctant to ask for support when distributed. When a VI participant encountered an accessibility issue, they would try on average three websites before asking their sighted partner for help. The majority of pairs (13 in the co-located and 12 in the distributed setting) divided the labour so that sighted participants performed booking-related activities and VI participants performed event organization activities. VI participants emphasised that they chose this approach to avoid accessibility issues. Vigo and Harper [5] categorized this type of behaviour as “emotional coping”, in which users' past experience of an inaccessible action on a similar webpage or in a similar task affects their judgment of which websites to use or which tasks to undertake.
It is clear from the results that VI users put thought into either dividing the labour in a specific way or finding some other way around the issues encountered.

B. Awareness

In the co-located condition, the main method of maintaining awareness was verbal communication; in the distributed condition, awareness was maintained only through email and instant messaging. To keep their partners aware of their activities while performing the task, participants reported the actions they were taking. In the absence of a tool that supported awareness, participants in both conditions tended to constantly provide their partner with information about their current activities to enrich group awareness. Indeed, pairs who completed more of the task in both conditions communicated more information about their activities. In the distributed condition, the more information pairs communicated to avoid duplication of effort, the higher their performance. This indicates that making this type of information available between distributed collaborators might enhance their ability to complete tasks efficiently. This was not the case in the co-located condition, where the sessions with the lowest and highest performance involved the same amount of communicated information relating to duplication of effort. However, pairs who performed well in the co-located sessions communicated more information about the actions they were performing that was not essential for the other partner to know. This indicates that facilitating the appropriate type and amount of awareness information in each condition is crucial to team performance and can increase team productivity [6].

C. Search Results Exploration and Management

Collaboration occurred mainly in two stages of the information seeking process: results exploration and results management. In the results exploration stage, collaboration was triggered either by VI participants viewing large amounts of information with their sighted partner's assistance or by both partners deciding to explore search results together. The average number of search results viewed collaboratively was higher than the average number viewed by VI participants alone. Screen readers' presentation of large volumes of data imposes a number of challenges, such as short-term memory overload and a lack of contextual information [3], and this stage is highlighted as one of the most challenging faced by VI users during the IS process [4]. The amount of retrieved information kept by sighted users was nearly double that kept by VI users. The reasons for this were twofold: (1) Sighted users viewed more results than their VI partners. (2) VI users experienced cognitive overhead when switching between the web browser and an external note-taking application. This increased cognitive load is likely to slow down the process. The effect is more apparent in the distributed condition, where VI users are required to switch between three applications: the email client or instant chat application, the web browser and the note-taking application.

IV. Scenario-based interviews findings

The scenario-based interview is a tool that allows exploration of the context of the task; this scenario narrative approach provides a natural way for people to describe their actions in a given task context. The interviews revealed that collaborative searching is quite a common practice, as all the participants were able to relate the given scenario to similar activities they had undertaken in the past. They also showed that ad hoc combinations of everyday technologies, rather than dedicated solutions, are often used to support this activity. There were clear instances of both VI and sighted interviewees using social networks such as Twitter and Facebook to share retrieved results. Individual and cross-modal challenges were also extensively mentioned by VI interviewees, as current screen readers fall short of conveying information about spatial layout and of helping users form a mental model of web pages congruent with that of their sighted partners. It is clear that the VI participants interviewed were fully aware of the drawbacks that the serial nature of screen readers imposes on their web search activities. In fact, these challenges led them to choose to perform some web search activities collaboratively when that was an option. In the interviews, sighted users tended to use more complex structures for storing retrieved information, such as headings or multi-level lists, while VI users tended to use simpler flat or linear lists.

V. Implications

The studies we carried out highlighted the challenges encountered when VI and sighted users perform a collaborative web search activity. In this section we propose a number of implications for the design of CCIS systems; due to space limitations, we present only three.

1. Overview of Search Results

Developing a mechanism that provides VI group members with an overview of search results and the ability to focus on particular pieces of information of interest could help increase VI participants' independence in CCIS activities. VI web searchers are likely to perform the results exploration stage more effectively and efficiently if they can first get a gist of the retrieved results and then drill down for more details as required. This would benefit both individual and collaborative information seeking activities.
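The overview-then-drill-down idea can be sketched in code. This is a hypothetical illustration, not part of the study's system: results are grouped so a screen reader could first announce a one-line gist per group, then read full details only for a group the user selects. All function names and data are illustrative assumptions.

```python
# Hypothetical sketch of "overview first, details on demand" for search
# results, as might be rendered through a screen reader. Illustrative only.
from collections import defaultdict

def build_overview(results, key=lambda r: r["domain"]):
    """Group raw results so each group can be summarised in one utterance."""
    groups = defaultdict(list)
    for r in results:
        groups[key(r)].append(r)
    return groups

def gist(groups):
    """One short line per group -- the overview a screen reader would speak."""
    return [f"{name}: {len(items)} result{'s' if len(items) != 1 else ''}"
            for name, items in groups.items()]

def drill_down(groups, name):
    """Full titles for one group, read only when the user asks for details."""
    return [r["title"] for r in groups[name]]

results = [
    {"domain": "hotels.example", "title": "Hotel A"},
    {"domain": "hotels.example", "title": "Hotel B"},
    {"domain": "flights.example", "title": "Flight X"},
]
groups = build_overview(results)
print(gist(groups))        # short gist spoken first
print(drill_down(groups, "hotels.example"))  # expanded only on request
```

Grouping by site domain is just one possible key; a real system might cluster by topic or by which collaborator found the result.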

2. Cross-modal Shared workspace

Having a common place to save and review retrieved information can enhance both the awareness and sense-making processes and reduce the overhead of using multiple tools, especially for VI users, who do not have sight of the whole screen at one time. The system should support a cross-modal representation of all changes made by collaborators in the shared workspace. Just as changes in a visual interface can be represented in colours, changes in the audio interface might be represented by a non-speech sound or by a modification to one or more properties of the speech sound, for example timbre or pitch.
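As a minimal sketch of this cross-modal mapping, one workspace change event could be rendered differently per modality: a colour cue in the visual view, a pitch or timbre cue in the audio view. The event shape, cue names and mapping below are all hypothetical assumptions, not a description of an existing system.

```python
# Illustrative sketch: render one shared-workspace change event per modality.
# Cue names (e.g. "rising pitch") stand in for real rendering calls.
def render_change(event, modality):
    """Map a workspace change to a modality-appropriate cue."""
    cues = {
        "visual": {"added": "green highlight", "edited": "amber highlight",
                   "deleted": "red strikethrough"},
        "audio":  {"added": "rising pitch", "edited": "timbre change",
                   "deleted": "falling pitch"},
    }
    return cues[modality][event["type"]]

event = {"type": "added", "item": "Hotel B", "by": "partner"}
print(render_change(event, "visual"))  # colour cue for the sighted partner
print(render_change(event, "audio"))   # non-visual cue for the VI partner
```

The point of the single `render_change` entry point is that both partners receive the same event; only the presentation differs per modality.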

3. Cross-modal Representation of Collaborators' Search Query Terms and Search Results

Allowing collaborators to know their partner's query terms and the results they have viewed will inform them about their partner's progress during a task. Additionally, having a view of a partner's search results can allow sighted users to collaborate with their VI partners while going through search results.
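One way to picture query-term sharing is a small shared log that either client can replay, so a screen-reader user could hear what their partner has already searched for without asking. This is a sketch under our own assumptions; the class and method names are hypothetical.

```python
# Minimal sketch of shared query-term awareness between two collaborators.
# A real system would sync this log over the network; here it is in-memory.
class QueryLog:
    def __init__(self):
        self.entries = []  # (user, query) pairs in the order issued

    def record(self, user, query):
        """Called by a client whenever its user issues a search query."""
        self.entries.append((user, query))

    def partner_queries(self, me):
        """Queries issued by anyone other than `me`, oldest first."""
        return [q for user, q in self.entries if user != me]

log = QueryLog()
log.record("sighted", "flights to Sydney")
log.record("vi", "Sydney opera house tickets")
log.record("sighted", "hotels near Sydney harbour")
print(log.partner_queries("vi"))  # what the VI partner's client would replay
```

Replaying the partner's queries as speech would also serve the awareness goal discussed in Section III.B, since query terms double as a compact progress report.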

VI. Conclusion

This paper discussed CCIS, an area that has not previously been explored in research. The studies presented in this paper are part of a project that aims to support the CCIS process. The next stage of the project is to investigate the validity of the design implications in supporting CCIS and their effect on collaborators' performance and engagement.


[1] Morris, M. R. (2008). A survey of collaborative web search practices. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, New York, USA. ACM.

[2] Golovchinsky, G., Pickens, J., and Back, M. (2009). A taxonomy of collaboration in online information seeking. In JCDL Workshop on Collaborative Information Retrieval.

[3] Stockman, T., and Metatla, O. (2008). The influence of screen readers on web cognition. Proceedings of Accessible design in the digital world conference. York, United Kingdom.

[4] Sahib, N. G., Tombros, A., and Stockman, T. (2012). A comparative analysis of the information-seeking behavior of visually impaired and sighted searchers. Journal of the American Society for Information Science and Technology.

[5] Vigo, M., and Harper, S. (2013). Coping tactics employed by visually disabled users on the web. International Journal of Human-Computer Studies.

[6] Shah, C., and Marchionini, G. (2010). Awareness in collaborative information seeking. Journal of the American Society for Information Science and Technology.

