Workflows are inflexible—Here are 5 easy ways to make them flexible and smart

Workflows for assembly, service and repair are often static and don’t consider the worker’s actual situation, experience and needs. It doesn’t have to be like that. Here’s how to build smarter workflows using workers' live input and feedback.

Imagine an employee has to perform a repair. Most of the time, the worker grabs a paper manual or a PDF, or, at best, a manual with augmented reality visualization. The first step is to find the corresponding page in the paper-based manual or to open the app with the instructions.

Regardless of prior knowledge or the actual environment, the worker follows predefined instructions through assembly, repair or training. This can very quickly lead to frustration and thus to mistakes. It is simply not the natural way we think and work.

Fortunately, there is another way. Over the years we have analyzed many use cases to understand what workers need to be productive. Augmented reality allows interactive guidance and training. Here are five examples of how AR applications can be made more flexible and efficient.

1. Provide instructions based on the worker’s needs 

Not every job is the same, and not every worker needs the same thing. Nevertheless, most instructions offer only rigid, one-size-fits-all processes.

With live user input, instructions can be personalized to always direct the worker to the right place and task in the visual guide. Based on the worker's experience, training needs or current situation, the AR guidance jumps directly to the right steps.

This saves time and is a much better user experience for workers. Training and workflows become more personalized, employees feel more involved and motivated and, of course, have more fun at work.
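Conceptually, personalizing a guide comes down to filtering or reordering steps against a worker profile. Here is a minimal Python sketch of that idea; the names (WorkerProfile, select_steps, the step fields) are purely illustrative and not part of any real REFLEKT ONE API:

```python
# Hypothetical sketch: pick only the steps relevant to this worker.
from dataclasses import dataclass

@dataclass
class WorkerProfile:
    experience_level: str  # e.g. "novice" or "experienced"

# Each step declares which audience it targets.
STEPS = [
    {"id": 1, "title": "Safety briefing", "audience": {"novice"}},
    {"id": 2, "title": "Locate access panel", "audience": {"novice", "experienced"}},
    {"id": 3, "title": "Replace filter", "audience": {"novice", "experienced"}},
]

def select_steps(profile: WorkerProfile) -> list:
    """Return only the steps that match the worker's experience level."""
    return [s for s in STEPS if profile.experience_level in s["audience"]]

novice = WorkerProfile("novice")
expert = WorkerProfile("experienced")
print(len(select_steps(novice)))  # 3 - the novice sees every step
print(len(select_steps(expert)))  # 2 - the expert skips the safety briefing
```

The same filtering idea extends to reordering steps or jumping to a step based on what the worker reports about the current situation.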

2. Ensure that the worker checks and confirms specific steps

Employees get interrupted, instructions are unclear, and unnecessary mistakes are the result.

Modern work augmentation platforms reduce this risk with queries and checks while the task or training is being completed. This is done through adaptive workflows with interactive buttons, knowledge queries or the option to request further information.

Typical scenarios are checking and confirming current states, errors or data. This can happen via text or as a visual check. Depending on the worker's input, additional steps can be inserted or a live call with an expert or support staff can be initiated.
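The branching logic behind such a check is simple to sketch: depending on the worker's answer and how often the check has failed, the workflow continues, inserts troubleshooting steps, or escalates to a live expert call. This Python fragment is a hypothetical illustration of that decision, not vendor code:

```python
# Hypothetical sketch of an adaptive check in a guided workflow.
def next_action(check_passed: bool, retries: int) -> str:
    """Decide what the workflow does after a worker confirms (or fails) a check."""
    if check_passed:
        return "continue"          # proceed to the next step
    if retries < 2:
        return "show_troubleshooting_steps"  # insert extra guidance
    return "start_expert_call"     # escalate to live support

print(next_action(True, 0))   # continue
print(next_action(False, 0))  # show_troubleshooting_steps
print(next_action(False, 2))  # start_expert_call
```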

[Image: User and Photo Input in REFLEKT ONE. User input combined with AR content makes workflows smarter.]

3. Give the worker the best-suited content 

Not every technician or trainee has the same level of knowledge. Some work better with 3D and text, others prefer 2D and live support.

With the help of the user input function, this can be easily solved. Depending on the needs of the worker, content can be provided in different forms: from a direct 3D view in a live image "directly on the machine" or a virtual view (each combined with text, video or images) to online content, everything is available. Users receive the appropriate content based on their own preferences.

4. Allow your workers to provide feedback and documentation

Typical instructions are one-way streets. Users receive information and are only able to consume it.

Photo and user input turn the one-way street into a highway with entrances and exits. Employees can document their feedback, new ideas or comments on the instructions and training directly in the AR application.

To do this, employees can take photos and add more information. The documentation takes place directly in the instructions or in the training scenario. This helps to permanently improve the applications with the feedback of the users and to document critical errors.
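What makes this feedback useful is that it is anchored to the exact step where it was recorded. A hypothetical data-model sketch in Python (the Feedback class and its fields are illustrative assumptions, not the platform's actual schema):

```python
# Hypothetical sketch: attach a worker's photo and comment to a specific step.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Feedback:
    step_id: str                      # the step the feedback belongs to
    comment: str
    photo_path: Optional[str] = None  # optional photo taken by the worker
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

log: list = []
log.append(Feedback(
    step_id="replace_filter",
    comment="Torque value in this step seems outdated",
    photo_path="photos/filter_housing.jpg",
))
print(log[0].step_id)  # replace_filter
```

Because every record carries a step ID and timestamp, critical errors can be traced back to the exact instruction that caused them.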

5. Understand the worker’s needs better and improve the instructions (reporting)

Rigid instructions and paper manuals make it difficult to process the information provided by users. User input usually cannot be stored and evaluated.

All data from user and photo input is stored in the REFLEKT ONE application and can be retrieved as individual records or as a consolidated report. Documentation, answers to knowledge queries and completed checks are all ready for evaluation.

This data is not available with paper manuals. For productive workflows and trainings, it is essential to understand which employees made which choices and which content was actually used. User and photo input is a treasure trove of new data.
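Turning stored input into a report is, at its simplest, an aggregation over recorded events. A minimal Python sketch, assuming a hypothetical event log of workers' answers to a check:

```python
# Hypothetical sketch: aggregate stored user input into a simple report.
from collections import Counter

events = [
    {"worker": "a", "step": "check_pressure", "answer": "ok"},
    {"worker": "b", "step": "check_pressure", "answer": "fault"},
    {"worker": "c", "step": "check_pressure", "answer": "ok"},
]

# How often did each answer occur at this check?
report = Counter(e["answer"] for e in events)
print(dict(report))  # {'ok': 2, 'fault': 1}
```

A repeated "fault" answer at the same step is a strong signal that either the machine fleet or the instruction itself needs attention.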

What does User & Photo Input look like in the application?

Watch the video with our Solutions Expert Andreas Lay, who talked to hundreds of users before building the Photo and User Input functionalities. It is a sneak peek of how user input turns rigid instructions into dynamic workflows and trainings.

 


Dirk Schart is the Chief Marketing Officer and leads REFLEKT's US business based out of San Francisco. He has more than 10 years of experience in B2B marketing and Augmented Reality. Dirk is the author of the book "Augmented and Mixed Reality" and writes about tech and enterprise marketing.