Week 4 – Notifications
Mobile and Ubiquitous Computing 2020/21
Sandy Gould, School of Computer Science University of Birmingham
Overview
This week our focus is on notifications on mobile devices. Anyone who owns a smartphone will be familiar with these – they pop up day and night, telling us we have messages, encouraging us to use an app, letting us know when payments have left our accounts. The growing number of devices able to create notifications means we are receiving more notifications than ever. This week we will be focusing on this important aspect of mobile computing, addressing the following key issues:
– What kinds of notifications do people receive?
– Why are these notifications disruptive?
– Why do we still have notifications if they are disruptive?
– How can we improve notifications?
– Creating and controlling notifications in Android
Important concepts
We’re going to be covering technical concepts this week, including how to create notifications with Android. We’re also going to be covering more theory-based topics to help us understand why notifications are so disruptive.
The prevalence of mobile notifications
Mobile notifications are increasingly prevalent. They are generated by our communication applications like SMS, email or WhatsApp, by social media networks, but also by a whole host of other applications. Pretty much all of the apps on your phone, whether they’re for games, banking or shopping, will generate notifications of some kind. Pielot et al. (2014) ran an observational study of smartphone notifications, using tracking software installed on devices. They found that on average their participants received 63.5 notifications each day. The study was conducted in 2014 or earlier, and given the increasing embeddedness of smartphones in people’s lives we can probably assume that this number is substantially higher these days.
Pielot et al. (2014) analysed the times of day when notifications appeared on people’s devices. As you’d expect, activity was lowest at night. Email activity peaked during the middle of the working day and messaging apps had two peaks, one around lunch time and another after working hours. Overall the study demonstrates that notifications are highly prevalent. This makes them worth studying in their own right.
The disruptiveness of notifications
Notifications are disruptive because they act as a perceptual draw and they require cognitive resources to process. They are intentionally a perceptual draw – they want to get our attention – and to this end notifications cause screens to flash, phones to buzz, LEDs to pulse and ringers to sound. Even if we’re already looking at our phones, phones still use these methods to draw attention to notifications, as well as showing the notifications on screen. These strategies are effective in gaining our initial attention and they can be hard to ignore. Perhaps you’ve been in the situation where someone else has notification alerts on in the cinema or on a train? It’s tough to block them out.
Once a notification has our attention it takes up our cognitive resources – our thinking capacity. You see that a notification has arrived and instantly you need to make a decision. Do you clear the notification? Do you leave it until later? Do you decide to interact with it? If you decide to interact with the notification, do you remember what it was you were doing before you switched your attention to the notification? If you decide to leave it until later, do you remember this later on? All of this takes mental effort, even if only for a brief moment. And it all starts to add up if you’re doing it 60 or more times each day!
Why is this kind of diversion disruptive? The answer lies in our working memory. Working memory is what psychologists call the temporary storage we have in our heads that helps us keep track of what we’re doing at any one moment. You will make extensive use of it when you’re programming to keep track of the information you’re trying to process and the algorithms you’re using to process it. It is distinct from the other kinds of memory that allow us to remember what our names are or how to throw a ball. The important thing to know about working memory is that it is very limited in capacity and it is ephemeral – forgetting is easy. You’ve probably experienced this when you’ve gone into another room to fetch something and suddenly you can’t remember why you’re standing in the room you’re standing in. (Science tells us that walking through doorways causes forgetting; Radvansky et al., 2010.)
To understand what makes interruptions like notifications more or less disruptive we can turn to a theory of working memory called ‘Memory for Goals’ (Altmann & Trafton, 2002). This theory posits that as we work through activities we create ‘goals’ like ‘send a message to a friend’. Sub-goals that help us achieve this goal are also created (like ‘open messaging app’). Memory for Goals says that at any one time we can have one goal as our focus. When we are working on a goal (e.g., send a message to a friend), the goal gets stronger in our working memory; it increases in activation. The higher the activation of a goal, the easier it is for us to remember it and the less likely we are to forget it. This makes a kind of intuitive sense: it’s unusual that in the very middle of typing a message to a friend we forget what we’re doing. The act of working on a task means we’re rehearsing the goal, and this rehearsal boosts activation.
The flip side is that when we are not working on a goal, we are not rehearsing it and its activation levels are always falling. Once the activation level of a given goal falls below an interference threshold then there is a high chance that it will be forgotten.
This is why notifications are disruptive. Suppose you are in the middle of composing that message to your friend. You’re interrupted by a notification from a friend that seems very urgent so you give them a call. Up to the moment of the notification, your goal, to send a message to your friend, is being worked on and so is increasing in activation. It’s not going to be forgotten. Then you switch to a different task, calling your friend. At this moment, your goal of messaging your friend is suspended and begins to lose activation. Your new goal, to have a phone call, is now gaining activation instead. The longer this call goes on, the more activation that your ‘send message’ goal loses. Eventually, after being on the phone for a long time, the activation of your goal to send a message falls below the interference threshold and you forget about it. Once the phone call finishes, you start working on some other activity having forgotten to message your friend.
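To make these dynamics concrete, here is a toy numerical sketch in Kotlin. It is not the actual Memory for Goals model (Altmann and Trafton build on ACT-R-style activation equations); the rehearsal boost, decay rate and threshold values below are invented purely to illustrate activation rising while a goal is worked on and decaying once it is suspended.

```kotlin
// Toy illustration of Memory for Goals (all numbers are invented, not from the theory):
// rehearsing a goal boosts its activation; suspending it lets activation decay;
// once activation falls below the interference threshold the goal is 'forgotten'.
fun main() {
    val interferenceThreshold = 1.0
    var sendMessageActivation = 0.0

    // Working on the goal: each moment of rehearsal strengthens it.
    repeat(6) { sendMessageActivation += 0.5 }
    println("While composing the message: activation = $sendMessageActivation")

    // Interrupted: the goal is suspended and decays each minute of the phone call.
    var minutesOnCall = 0
    while (sendMessageActivation >= interferenceThreshold) {
        sendMessageActivation *= 0.7   // invented decay rate
        minutesOnCall++
    }
    println("After $minutesOnCall minutes on the call the goal has fallen below " +
            "the threshold – the message is forgotten.")
}
```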
Even if your goal of ‘send message’ is not forgotten, when you finish an interrupting task there are further cognitive costs of re-engaging with the task you were working on before you were interrupted. When you resume you need to re-encode parts of the task you might have forgotten (i.e., working out what the sentence you were half-way through said and was going to say). This re-encoding takes time and effort, but it is also error-prone; you might resume your sentence having misunderstood what it was you were going to say, for instance. These re-encoding costs can start to add up if we are constantly switching between tasks.
It’s important to realise that Memory for Goals is a model-based theory of memory. If you were to look at a brain you could not point to a goal. You could not point to an interference threshold. It’s a model, but one that empirical observation suggests is good for making predictions about behaviour.
Why do we continue to receive notifications knowing that they are disruptive?
Given notifications seem to be inherently ‘bad’ for us, why do we have systems that generate so many of them? It’s a combination of two things. Notifications are very often useful, conveying information that is important or time-sensitive; this utility means people are unlikely ever to turn notifications off. Notifications also have a habitual or compulsive element. In other words, we check for and attend to notifications out of habit, even if the notifications themselves have little or no utility.
The utility of notifications comes from the information that they carry (it could be valuable information telling you that you’ve won a prize) or the fact that sometimes this information is time-critical (claim your prize in the next hour!). Notifications also have utility in that they can, to some degree, relieve boredom in a quiet moment. We know that people will willingly submit themselves to electric shocks rather than sit in a room with nothing to do but think (Wilson et al., 2014), so it’s not surprising people feel there is value in having notifications appear when they’re feeling bored.
Sometimes we find ourselves unlocking our phones looking for notifications for no other reason than that it’s something we do out of habit. Like Pavlov’s dogs, we have become conditioned to associate notifications with ‘good things’. Over time we don’t even need ‘good things’ to happen; we end up with a direct association between receiving notifications – any notifications – and a positive feeling. So we check habitually. For some people this habitual checking becomes compulsive, meaning they start to feel anxious about not having notifications (Pielot & Rello, 2017). Designers understand this habitual nature and make use of it to try and get you to buy things that you don’t need or interact with services that are of no relevance to you.
Making notifications better
Given that notifications are disruptive but very much here to stay, what can we do to make them better? We might do this by altering how they look, when they appear or by trying to make them more aware of a user’s current context. These, in turn, could reduce the cognitive load and the perceptual draw of notifications.
One solution to creating better notifications is ‘CallHeads’ (Böhmer et al., 2014). Historically, when calls came into smartphones, the call accept/reject screen would come into the foreground, replacing whatever you were doing. So if you were working on an important email and then someone called you, suddenly your email would be replaced by an accept/reject call screen. You lost the context of your task and you were required to make a decision (accept/reject/hold etc.) before you could return to your previous activity. CallHeads proposes an alternative to this – instead of a whole-screen activity, you get a small unobtrusive pop-up in the corner of your screen telling you that you have a call coming through. Critically, this does not stop you working on your current task, and you can completely ignore the pop-up if you want. If you do want to accept or reject the call then the widget allows you to do this without leaving your current context. You can also dismiss the notification without it being a ‘reject’ – it just keeps ringing for the person calling as if you hadn’t noticed their call. It’s not a perfect solution though, because we still have that perceptual draw and the cognitive demand of deciding what to do with it. The authors tested the app with 10,500 regular users and the concept was incorporated into later versions of Android.
Another way we can reduce the disruptiveness of notifications is to control when they appear to users. One of the supposed advantages of push notifications is that information becomes available instantly. But how many of the notifications that we get are truly urgent? How often have you received a notification in the last few days where having to wait an hour for the notification to arrive would’ve made a big difference? It’s probably not all that many. There is a long history of researchers trying to find ‘good’ moments for notifications: moments that are less disruptive.
Fischer et al. (2011) focused on ‘breakpoints’ in tasks. The idea of a ‘breakpoint’ in this context relates to the fact that as we work through tasks there are periods of high and low cognitive load. Often these periods of low cognitive load occur between two tasks. Remember from Memory for Goals: when we’re interrupted, the goal of the task that we were working on loses activation until it’s forgotten. But if we hold an interruption like a notification back until someone has just finished a task, it doesn’t matter if that goal is forgotten, because the task is complete.
Moreover, at the points between tasks we have less information encoded in our working memory. When you’re right in the middle of a complex task (say working on an Android Studio labsheet), you’re trying to keep a lot of information encoded in working memory. If you’re interrupted by a notification you could potentially lose this information and have to re-encode it. That is one of the things that makes notifications disruptive. If notifications come between tasks this isn’t a problem – there is no task information to lose!
So Fischer et al. ran a field experiment to investigate how the timing of notifications affected their disruptiveness. They found, as you’d expect, that people dealt with notifications and their tasks more quickly when notifications arrived between tasks. What was also interesting was that the workload of the secondary task (i.e. the effort of dealing with the substantive content of the notification) also influenced interruptibility. People are less likely to accept being interrupted by a very complex task than by a simple one. This makes sense if you think about it – people have an implicit understanding of what’s more disruptive and they realise that more complex interrupting tasks are going to cause them more problems in terms of completing their original primary task.
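As a sketch of the general idea (a hypothetical design, not the system Fischer et al. actually built), an app could queue notifications as they arrive and only release them when it detects a breakpoint, such as the user finishing or closing a task:

```kotlin
import java.util.ArrayDeque

// Hypothetical deferral queue: notifications posted mid-task are held back and
// only delivered when the app signals a breakpoint between tasks.
class BreakpointNotifier(private val deliver: (String) -> Unit) {
    private val pending = ArrayDeque<String>()

    // Called wherever the app would normally show a notification immediately.
    fun post(message: String) {
        pending.add(message)
    }

    // Called at a breakpoint, e.g. the user has just finished or closed a task.
    fun onBreakpoint() {
        while (pending.isNotEmpty()) {
            deliver(pending.remove())
        }
    }
}

fun main() {
    val notifier = BreakpointNotifier { println("Notify: $it") }
    notifier.post("New message from Alex")     // arrives mid-task, held back
    notifier.post("Your parcel has shipped")   // also held back
    notifier.onBreakpoint()                    // both delivered between tasks
}
```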
Mehrotra et al. (2015) used machine learning to understand the patterns of behaviour of people dealing with notifications. Based on current context (e.g., time of day) and the learned associations between when notifications appeared and the ways people dealt with them, the authors developed a notification management system that, based on the category of a notification (e.g., social media, productivity), held the notification until an opportune time and location for it to be released.
You can see in Mehrotra et al.’s work the idea that increasing the context awareness of a notification system could reduce the disruptiveness of notifications. One product, the ‘Dot’, claimed to improve the experience of notifications through location-bound notifications from Bluetooth beacons positioned in strategic locations. The good thing about this product is that there is a degree of context awareness, in that notifications appear in a specific location with information relevant to that location. However, each beacon requires set-up, and the product does nothing to reduce the number of notifications that you receive. Realistically, how does this make notifications less disruptive? It seems like they are now taking up even more of our attention as we wander around the house getting messages from beacons.
Any notification system that requires people to spend time and effort to set up is already more disruptive than one that doesn’t require this kind of setup. So any solution that requires a lot of user input is not going to work. This is one of the reasons that people have employed machine-learning systems that attempt to determine how people are themselves filtering notifications and then act appropriately in controlling the flow of notifications.
Choosing the right kind of notification for Android
Android supports two kinds of notification: a ‘proper’ notification, which appears at the top of the screen and in your notification drawer, and a ‘toast’, which is a very small notification that pops up during the course of using an app. The type you would use in practice depends on the situation. Toasts are used for small contextual notifications which appear while using an app (e.g., ‘Email sent successfully’). Notifications are used for sending less ephemeral information that a user might want to consume immediately (e.g., a text message arriving) or the next time that they look at their phone.
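A toast can be raised with a single call. Here is a minimal Kotlin sketch; the helper function name and message text are just examples:

```kotlin
import android.content.Context
import android.widget.Toast

// Show a brief, contextual message while the app is in use.
// The toast disappears on its own and never reaches the notification drawer.
fun confirmEmailSent(context: Context) {
    Toast.makeText(context, "Email sent successfully", Toast.LENGTH_SHORT).show()
}
```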
Since Android 8 (API 26), Android has attempted to improve the granularity of notifications by introducing NotificationChannels to the NotificationManager that is used to control the delivery of particular notifications. The idea here is that a given application might be generating notifications for a variety of reasons. For example, a news application might generate notifications for breaking news, but also for regular features or live scores for sports. As a user, some of these notifications might be of significant interest and some might be of no interest at all. The idea of channels is that users can control the flow of notifications from different sources within an app without having to receive all notifications or completely disable all notifications.
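To make this concrete, here is a minimal Kotlin sketch of creating a channel and posting a notification to it. The channel ID, strings and icon resource are made-up examples, and on Android 13+ the POST_NOTIFICATIONS runtime permission also needs to be granted before the notification will be shown.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

const val BREAKING_NEWS_CHANNEL = "breaking_news"     // example channel ID

// Channels only exist from API 26; on older versions this call is skipped.
fun createBreakingNewsChannel(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val channel = NotificationChannel(
            BREAKING_NEWS_CHANNEL,
            "Breaking news",                           // name the user sees in settings
            NotificationManager.IMPORTANCE_HIGH
        ).apply { description = "Alerts about major breaking stories" }
        context.getSystemService(NotificationManager::class.java)
            .createNotificationChannel(channel)
    }
}

// Build and post a notification on that channel; the user can now silence
// 'breaking news' in system settings without muting the whole app.
fun postBreakingNews(context: Context, headline: String) {
    val notification = NotificationCompat.Builder(context, BREAKING_NEWS_CHANNEL)
        .setSmallIcon(R.drawable.ic_news)              // hypothetical icon resource
        .setContentTitle("Breaking news")
        .setContentText(headline)
        .setPriority(NotificationCompat.PRIORITY_HIGH) // used on pre-API-26 devices
        .build()
    NotificationManagerCompat.from(context).notify(1, notification)
}
```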
References
Altmann, E. M., & Trafton, J. G. (2002). Memory for goals: An activation-based model. Cognitive Science, 26(1), 39–83. https://doi.org/10.1016/S0364-0213(01)00058-1
Böhmer, M., Lander, C., Gehring, S., Brumby, D. P., & Krüger, A. (2014). Interrupted by a Phone Call: Exploring Designs for Lowering the Impact of Call Notifications for Smartphone Users. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, 3045–3054. https://doi.org/10.1145/2556288.2557066
Fischer, J. E., Greenhalgh, C., & Benford, S. (2011). Investigating episodes of mobile phone activity as indicators of opportune moments to deliver notifications. Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, 181–190. https://doi.org/10.1145/2037373.2037402
Mehrotra, A., Vermeulen, J., Pejovic, V., & Musolesi, M. (2015). Ask, but Don’t Interrupt: The Case for Interruptibility-aware Mobile Experience Sampling. Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, 723– 732. https://doi.org/10.1145/2800835.2804397
Pielot, M., Church, K., & de Oliveira, R. (2014). An In-situ Study of Mobile Phone Notifications. Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, 233–242. https://doi.org/10.1145/2628363.2628364
Pielot, M., & Rello, L. (2017). Productive, Anxious, Lonely: 24 Hours Without Push Notifications. Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, 11:1–11:11. https://doi.org/10.1145/3098279.3098526
Radvansky, G. A., Tamplin, A. K., & Krawietz, S. A. (2010). Walking through doorways causes forgetting: Environmental integration. Psychonomic Bulletin & Review, 17(6), 900–904. https://doi.org/10.3758/PBR.17.6.900
Wilson, T. D., Reinhard, D. A., Westgate, E. C., Gilbert, D. T., Ellerbeck, N., Hahn, C., Brown, C. L., & Shaked, A. (2014). Just think: The challenges of the disengaged mind. Science, 345(6192), 75–77. https://doi.org/10.1126/science.1250830