iOS 8 Beta Testing using TestFlight

Xcode 6 and iOS 8 have been released together with a new way of beta testing iOS applications. This is an introduction and a step-by-step guide to setting up an iTunes Connect and TestFlight beta test with internal testers.

After months of beta testing iOS 8 myself, I’ve been waiting for this day: the day Apple adds scalability and a new dimension to beta testing of iOS applications. Until now there has been plenty of speculation about what Apple would do with beta testing, and we now have most of the answers that we at Beta Family, and every iOS developer, were looking for. With Xcode 6 and iOS 8, which are the requirements for the service, a whole new set of possibilities opens up for beta testing your application. Through iTunes Connect you manage your testers, and TestFlight makes your app available to them. Xcode 6 and iOS 8 are available to developers in the iOS Dev Center. Developers will no longer be held back by UDIDs when beta testing; instead they will work with testers’ Apple IDs.

To begin with you’ll only be able to beta test your app with 25 internal testers. Soon, though, 1,000 external testers, who are not part of your development organization, will also be available. When deploying an app to external testers, the app has to go through a Beta App Review and must follow the App Store Review Guidelines before the beta testing can start. A review will be required for every new version that contains significant changes. You’ll be able to test 10 apps at the same time.

All this is a big step forward for beta testing on Apple’s platform, and a welcome change for both developers and Beta Family. We, Beta Family, will make several changes and updates to our service. First we’ll organize testers’ Apple IDs for easy distribution. We’ll also add options for developers to choose when they want to test a beta version through iTunes Connect and whether they want to test it with internal or external testers.

1. Sign in to iTunes Connect and add iTunes Connect users under “Users and Roles”. When you’ve entered the user’s details you’ll be able to choose the user’s role. “Technical” (read only) is the role to prefer if you don’t want to give the user access to maintain your available apps.

The users you add will receive an e-mail, and every user will have to verify their e-mail address. You can add and delete users whenever you want.

When the users have verified their e-mail address you’ll find them under “Users and Roles”. Click the user’s e-mail address and turn on the “Internal Tester” switch.

2. Install Xcode 6 from the iOS Dev Center. If you want to keep previous versions of Xcode you can create a folder inside Applications and move your existing Xcode version to that folder.

3. Archive your project, choose Product > Archive.

Now click on the “Submit…” button.

Choose your Development Team to use for provisioning.

Submit your app.

4. In your iTunes Connect account you can now browse to your app; you’ll find it under “My Apps”. Go to “Prerelease”, where you’ll find your uploaded build. Enable beta testing for this build by turning on the switch on the right-hand side of the screen.

5. Inside “Prerelease” there’s a submenu containing “Builds” and “Internal Testers”. Now that you’ve enabled TestFlight beta testing you can invite testers under “Internal Testers”. Check the testers you want to invite and click the “Invite” button.

Testers will receive an e-mail with an invite.

When the testers click “Open in TestFlight” they will be redirected to the TestFlight app and asked to install the app.

Testing is now under way and an orange dot will appear next to the beta version of the app.


We, Beta Family, will still provide developers with our awesome community of testers, but now without the trouble of UDIDs. Tester management will be easier with Apple IDs, which Beta Family, of course, will provide for you. There will also be zero file handling, which will without doubt speed up the testing process, since uploading builds and adding users can be managed instantly.

Note: Of course you will be able to use the system just like before.

Seven disastrous software bugs and fails

While functioning software scales productivity, malfunctioning software scales chaos and havoc. Naturally, some bugs cause worse problems than others. Here are seven of the worst and most spectacular software bugs over the decades of software development, not listed in any particular order. Use these cautionary tales as motivation to test properly!

1. Thousands of patients listed as dead

The St Mary’s Mercy hospital in Michigan, USA wrongfully informed the authorities and Social Security that 8,500 of its patients had passed away between October 25 and December 11 of 2002. A spokeswoman for the hospital explained that an event code in the patient-management software had been wrongfully mapped: the code 20, for “expired”, was used instead of 01, for “discharged”. Needless to say, a plethora of legal, billing and insurance issues followed the incident.
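The failure mode is easy to reproduce in miniature. In the sketch below, only the event codes 01 and 20 come from the incident report; the function and data structures are invented for illustration. A single mis-mapped constant silently corrupts every record that passes through it:

```python
# Event codes from the incident report; everything else is hypothetical.
DISCHARGED = "01"
EXPIRED = "20"

def record_discharge(patient):
    """Buggy version: the discharge event is mapped to the wrong code."""
    patient["status"] = EXPIRED  # should have been DISCHARGED
    return patient

patient = record_discharge({"id": 1234, "status": None})
print(patient["status"])  # "20": downstream systems now report a death
```

Because the wrong value is still a valid code, nothing crashes; only a test comparing actual against expected output would have caught it.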

2. Violent offenders paroled in California

The state of California had been asked in 2011 to reduce its prison population by 33,000, with preference given to non-violent offenders. Again a mapping error occurred, reversing the preference criteria and instead giving non-revocable parole to approximately 450 violent inmates, exempting them from having to report to parole officers after their release.


3. Stock trading algorithm buys high and sells low

The market-making and trading firm Knight Capital Group was using a trading algorithm that, during a single 30-minute period, inexplicably opposed sound economic strategy by buying high and selling low; the company’s stock dropped 62 percent in just one day. The company would not describe the issue in detail but referred to it as a “large bug” infecting its market-making software.


4. NASA miscalculation destroys Mars Orbiter

NASA launched the Mars Climate Orbiter at the end of 1998, but due to an error in the ground-based software, the Orbiter went missing in action after 286 days. Its trajectory had been incorrectly calculated, in large part because different programming teams used different units. As a result, the thrust was almost 4.5 times more powerful than intended, leading to the wrong entry point into the Mars atmosphere, where the $327 million Orbiter disintegrated.
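The actual flight software isn’t public, but the unit mix-up is simple to illustrate. In this sketch (all function names are invented) one module reports thruster impulse in imperial pound-force seconds while its consumer expects metric newton seconds, producing the factor of almost 4.5 mentioned above:

```python
LBF_TO_NEWTON = 4.448222  # 1 pound-force expressed in newtons

def ground_impulse_lbf_s(thrust_lbf, burn_seconds):
    # One team's software produced impulse in pound-force seconds...
    return thrust_lbf * burn_seconds

def expected_impulse_n_s(thrust_lbf, burn_seconds):
    # ...while the trajectory software expected newton seconds.
    return thrust_lbf * LBF_TO_NEWTON * burn_seconds

imperial = ground_impulse_lbf_s(1.0, 10.0)
metric = expected_impulse_n_s(1.0, 10.0)
print(metric / imperial)  # ≈ 4.45: every thrust event was off by this factor
```

Both numbers look plausible on their own, which is why only an interface test comparing the two modules against a shared unit convention would have flagged the mismatch.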


5. Delay in tracking system led to missile strike

During the Gulf War and Operation Desert Storm in 1991, the Patriot Missile System failed to track and intercept a Scud missile aimed at an American base. The software had a timing issue which caused a sensor delay that would continue to grow until system reboot, with noticeable detection accuracy loss after approximately 8 hours. On the date of the incident, the system had been operating continuously for more than 100 hours, resulting in such a big delay that the software was actually looking for the missile in the wrong place. The Scud missile hit the American barracks, leaving 28 dead and over 100 injured.

6. Radiation therapy device delivers lethal doses

During the mid-1980s, a medical radiation therapy device called the Therac-25 was used in hospitals. The machine could operate in two modes: a low-power electron beam, or X-ray, which was much more powerful and only supposed to function if a metal target was placed between the device and the patient. The previous model, the Therac-20, had an electromechanical safety interlock to ensure that this metal target was in place, but for the Therac-25 it was decided that a software check would replace it. Due to a particular kind of bug called a “race condition”, a device operator typing fast could bypass the software check and mistakenly administer lethal radiation doses to the patient. The issue resulted in at least five deaths.
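A race condition is easiest to see as a lost update. The real Therac-25 code was PDP-11 assembly and is not reproduced here; the sketch below deterministically replays one bad interleaving of a read-modify-write sequence that two concurrent tasks perform on a shared variable:

```python
# Illustrative only: two "threads" A and B each want to add 1 to a shared
# counter, but their read and write steps interleave badly.
shared = 0

a_read = shared          # A reads 0
b_read = shared          # B reads 0, before A has written back
shared = a_read + 1      # A writes 1
shared = b_read + 1      # B writes 1, clobbering A's update

print(shared)  # 1, not the expected 2: one update was silently lost
```

Because the bad interleaving only occurs under particular timing (here, an operator typing fast), the bug can survive years of testing that never hits that window.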


7. Phone switches reset repeatedly, disabling all long-distance calls

US long-distance callers using the operator AT&T on January 15th, 1990 found that no calls were going through. The long-distance switch network spent nine hours rebooting itself over and over, at first leading the company to believe it was being hacked. The issue turned out to be software related: a timing flaw led to cascading errors.

Any malfunctioning switch was designed to reboot, which took 4-6 seconds. During this time calls were rerouted to other switches. The rebooted switch would then signal that it was back online, so that it could begin routing calls again.

The issue occurred when multiple messages were received during the reboot period, a consequence of AT&T tweaking its code to speed up the process. The receiving switch interpreted this as a sign of faulty hardware and, as a safety measure, rebooted itself as well. Naturally, while that switch was down, its replacement received the same conflicting signals. Cascading messages between switches now made them reset each other indefinitely.

AT&T eventually solved the problem by reducing the message load on the network, allowing all 114 switches in the network to recover. The issue cost an estimated $60 million in lost revenue.



Peter Skogsberg
Peter Skogsberg is an ISTQB certified software tester and holds a Master’s degree in Information Technology. He has been on both the development and the testing sides of several mobile and web applications.

Testing techniques – difference between white box and black box testing

This post is part of a monthly series of articles on software testing. We intend to cover testing in general, but especially targeted at mobile apps. Developers and testers alike will find useful testing tips and techniques.

We explain the difference between white box and black box testing, followed by some helpful tips and techniques. Along the way we also define some terms.

What color is the box?
A common distinction between two vastly different types of software testing is that between white box and black box. You may associate the latter term with plane crashes, but in this case it denotes that you are testing the software without any inside knowledge of its source code, architecture or internal design. This is most likely the case with any app you test on The Beta Family.

White – the insider
White box testing, on the other hand, is mainly used by developers with access to the source code itself. Distinct code units can be tested separately by writing unit tests that assert expected output for given input data. These tests often strive to cover as many if statements and code branches as possible. The measure of this is called code coverage, often defined as the percentage of code lines covered by test cases.

While the general consensus is that developers would theoretically like 100% code coverage, it’s also recognized that this is very hard to achieve in practice, as it would require defining the expected results for all combinations of input, variables and states. An advantage, however, is the reusability of unit tests, which makes them very suitable for automation. I should also point out that there are other types of white box tests, including ones based on access to documentation rather than the actual source code.
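As a concrete illustration, here is what “asserting expected output for given input” and branch coverage look like in practice. The function under test is invented for the example:

```python
def discount(price, is_member):
    """Hypothetical function under test, invented for illustration."""
    if is_member:
        return price * 0.9   # members get 10% off
    return price

# Two unit tests: together they execute both branches of the if statement,
# giving 100% branch coverage of discount().
def test_member_branch():
    assert abs(discount(100.0, True) - 90.0) < 1e-9

def test_non_member_branch():
    assert discount(100.0, False) == 100.0

test_member_branch()
test_non_member_branch()
```

Once written, these tests can be re-run automatically on every code change, which is exactly the reusability advantage mentioned above.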

Black – testing the functionality
Let us move the focus back to black box testing, as this is more relevant to tests and testers on The Beta Family. When testing from a black box perspective much revolves around the functionality itself. As Wikipedia defines the technique: “The tester is aware of what the software is supposed to do but is not aware of how it does it”.

So if you’re an app developer publishing your app for testing on The Beta Family, it’s extremely helpful for the tester if you attach some kind of documentation over expected functionality. Without it you’re essentially relying on the tester’s personal opinion of how the app should work. This may be good or bad: it could produce more “bug” reports than necessary, but also give you unbiased first impressions.

Techniques and tricks within black box testing
Depending on the size and complexity of the app you are testing, it may not be viable to evaluate the expected outcome of every possible scenario. Luckily there are a number of techniques and tricks that, with high probability, help identify possible bugs and errors.

Sanity check – Testers use their common sense to infer correct behavior and output. For example: a calendar event should never end before its start date, a product that is out of stock can not be purchased and a salary should never be negative.

State transition analysis – Given the current state of an object, which actions should be possible to perform? If you are testing a turn-based game and it’s the opponent’s turn, can you still perform your move? If you’re writing an email and have not yet filled in the recipient field, should the Send button be enabled?
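The email example can be phrased as a tiny, testable state check. Everything here is invented for illustration:

```python
def send_enabled(draft):
    # Hypothetical rule: Send is enabled only once a recipient is present.
    return bool(draft["recipient"])

draft = {"recipient": "", "body": "Hi!"}
assert send_enabled(draft) is False   # no recipient yet: Send disabled

draft["recipient"] = "tester@example.com"
assert send_enabled(draft) is True    # recipient filled in: Send enabled
```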

Equivalence partitioning – Test case inputs are grouped by similarity to guarantee that every input class is tested at least once. An alarm app may be set to be active on weekdays (Monday through Friday) while allowing sleeping in on weekends (Saturday and Sunday). While one certainly could create a total of seven test cases, it is highly probable that two will suffice.
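In code, the alarm example reduces to two equivalence classes and therefore two assertions. The function is hypothetical:

```python
def alarm_active(weekday):
    # weekday: 0 = Monday ... 6 = Sunday (invented for illustration)
    return weekday <= 4   # active Monday through Friday

# Seven possible inputs, but only two equivalence classes, so two tests suffice:
assert alarm_active(2) is True    # any weekday represents Mon-Fri
assert alarm_active(6) is False   # any weekend day represents Sat-Sun
```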

Boundary value analysis – The critical edges of a numeric range are used as input in order to verify the most extreme cases. If we consider a calendar app used to book a certain date, the month number should range between 1 and 12. The smallest and largest valid values should be tested, but also the invalid edge cases just outside the range. Boundary value analysis can be seen as a special application of equivalence partitioning, since for instance any number higher than 12 likely yields the same result.
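The same calendar example expressed as boundary tests, with a hypothetical validator:

```python
def is_valid_month(month):
    # Hypothetical validator for the calendar example.
    return 1 <= month <= 12

# Boundary value analysis: the smallest and largest valid values, plus the
# invalid values just outside the range.
for month, expected in [(0, False), (1, True), (12, True), (13, False)]:
    assert is_valid_month(month) is expected
```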

Stress testing and error guessing – Testers use their experience from other software to figure out where errors likely reside, and to force errors. We briefly discussed this at the end of our previous blog post, How to achieve better testing.

The above are just a few of the testing techniques that exist, but they are a very good start to help testers take a systematic and informed approach. More techniques and term definitions are available in our software testing glossary, which is based on the glossary from the International Software Testing Qualifications Board (ISTQB). We’ll continue to discuss glossary terms in upcoming articles and also provide some tips for those interested in taking the exam to become a certified software tester.



How to achieve better testing

Here I explain why our tester members are sure to be motivated and describe some good practices for app testing. Developers will get a few tips on how to design tasks and formulate their instructions to testers.

Motivation is key
It’s of course a truism that in whatever task you pursue, motivation is key for achieving good results. Testers on The Beta Family are sure to be motivated as they get the chance to try cool new apps before the general public and have the incentive to become a top tester and get more test invites, earning them more cash. So not only will the testers file bug reports, they’ll actually be encouraged to provide constructive input and fresh ideas.

A test report from The Beta Family can be very useful feedback for a developer, and a tester can feel proud about their contribution to a better app. But for the outcome to be successful the test instructions and objectives must be clear to both parties. It especially helps the tester if you as a developer emphasize where you want the test report focus to be: functionality, layout, or even suggestions for additional features?

Ask and thou shall receive
While software testing in general isn’t very open to creative suggestions about content or functionality, The Beta Family is. Besides just asking your testers to report bugs and issues, you are free to make the most creative use of our testers. Ask open questions and you’ll receive more feedback: try to avoid yes-or-no questions, and instead ask for detailed opinions.

Helpful assets to provide the testers include some kind of navigational blueprint, for instance a checklist of all the screens or views of the app, or a few functional scenarios you want the tester to try to complete. This ensures that the tester actually sees and evaluates the entire app and its functionality. Remember to steer clear of the most common app mistakes.

Explore and speak your mind
App testing on The Beta Family is normally black box, which means the tester doesn’t have, or need, any knowledge of how the app works internally. In these cases a technique called exploratory testing¹ is often suitable. Simply put, it’s a testing methodology where the tester spontaneously learns about how to use and test the app by playing around with it.

One of the upsides with this technique is the ease of getting started, the tester doesn’t necessarily need an initial testing plan. Their experience and intuition from using similar apps allow them to use deductive reasoning to infer app usage and where to look for potential issues or bugs.

As black box testing is very much trial-and-error in its initial stages, the developer will be interested to see potential misconceptions and difficulties with the app. This is where the SuperRecorder SDK² feature of The Beta Family can help: test sessions are recorded from the actual device, along with the tester’s microphone audio, allowing them to voice their train of thought at all times.

Example of the SuperRecorder in action.

As a tester recording a session, be sure to speak your mind continuously throughout the testing session. Any feedback is useful, especially about things that catch you off guard: bugs, navigational inconsistencies, textual spelling or grammar errors, etc. And don’t be afraid to make comparisons against other similar apps, in the real-world app market there will be competition.

Five good testing practices
Here are five practices that will make you a better member of The Beta Family.

1. Be mean to the app. Use your imagination and try to force errors. What happens if you disable the WiFi or cell connection during a download in the app? What about rotating the screen from portrait to landscape? Can you write letters into a field that expects numeric input?

2. Describe expected outcome. If you think you’ve found a bug, be sure to explain why you think so by describing what actually happened, versus what you thought would happen.

3. Try to reproduce. Any bug encountered should be accompanied in the report by steps for reproduction, whenever possible: “When opening the left-hand side swipe menu and selecting Settings, the app freezes and eventually crashes without any error message”. Any developer will tell you that this is extremely helpful for debugging.

4. Be honest. As a tester on The Beta Family you have multiple roles: software tester first, but also a test pilot of the potential product. Would you ever use the app in a real-world scenario? Would you consider paying for it? What would it take for you to do so?

5. Be constructive but never rude. You may well think the app sucks and should be put to sleep, but keep rude comments out of your report. Remember that you will receive feedback and a rating from the developer, so be constructive and describe ways to improve the app instead. It’s surely more appreciated, and a better way to become a top tester and make more money on The Beta Family.



App testing and The Beta Family

This is the first post in a monthly series of articles on software testing. We intend to cover testing in general, but especially targeted at mobile apps. Developers and testers alike will find useful testing tips and techniques.

Why test mobile apps?
If you were asked to describe the main purpose of software testing you would probably think of finding and fixing bugs and to verify intended functionality and behavior. Quality assurance, simply put. Those are all valid points, but more interestingly, it can help your app to stand out against its competition and ultimately even increase profit.