There are four ways to manage jobs in vWork when multiple workers are involved.
Each approach has strengths and weaknesses. These are detailed below.
1. Have one job and move it between workers as needed.
This approach is used by a number of customers who have multi-stage jobs. A good example is a tree trimming company where the job goes through a scoping process with an advance team, is then assigned to the trimming team, and is finally re-assigned to a chipping and cleanup team. This works well and is the simplest approach.
The downside is that the 2nd and 3rd stages cannot be scheduled for a specific time.
On the schedule, the paused job remains displayed where it was started until it is unpaused by the 2nd worker.
Via the API, this is a single job. It exposes who the “current” assigned worker is, all step completion times, and any pause events, but it cannot report who the historical workers were (see the sketch at the end of this option).
This is a good approach if your primary goals are scheduling jobs, getting real-time visibility of job status, keeping customers informed of progress, and recording how much time was spent on the job in total and when.
It is not the best solution if you are trying to record individual worker time for timesheeting purposes.
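For integrators, the reporting described above might look something like the sketch below. It is a minimal illustration only: the base URL, endpoint path, authentication header, and field names (assigned_worker, steps, pause_events) are assumptions, so check the vWork API documentation for the actual schema.

```python
import requests

# Assumed base URL, auth scheme, endpoint path, and field names -
# all for illustration only; consult the vWork API docs for the real schema.
BASE_URL = "https://api.vworkapp.com"
API_KEY = "your-api-key"

def job_progress(job_id):
    """Fetch one job and summarise the current worker, step completion
    times, and pause events."""
    resp = requests.get(
        f"{BASE_URL}/jobs/{job_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    job = resp.json()
    return {
        # Only the *current* assignee is exposed; historical workers are not.
        "current_worker": job.get("assigned_worker"),
        "step_completions": {
            step["name"]: step.get("completed_at")
            for step in job.get("steps", [])
        },
        "pause_events": job.get("pause_events", []),
    }
```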
2. Have one job with multiple workers assigned at once.
This approach is commonly used when there is a primary worker and a secondary worker (usually an apprentice or junior). The primary worker is given the full editable copy of the job and an indication of who the secondary workers are.
The secondary worker is given a read-only copy of the job, where they can see all the job information and record their own start and finish times.
This is actually two linked jobs in vWork and each can be scheduled individually. Each job can also be started, paused, and completed independently.
The primary job is given a normal vWork ID. The secondary job is given the same vWork ID with “-1” appended. Multiple secondary workers can be added; each is created as a separate but linked job with the same vWork ID and “-2”, “-3”, and so on appended.
Whilst it is not possible to create multi-worker linked jobs via the API, it is possible to create a primary job that a dispatcher can then add secondary workers to. This is done by specifying a template name that exists in vWork and has Multi-worker enabled (see the sketch at the end of this option).
This is a good approach if you need to record time spent on the job per worker or if you need to schedule the 2nd or 3rd stage of the job independently of the first job.
This approach is not a good fit if you need the secondary workers to record any information more complex than the time they spent on the job.
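As a sketch of the API-created primary job described above, the snippet below posts a job that references a Multi-worker-enabled template by name. The endpoint, payload fields, and template name are assumptions for illustration; once the job exists, a dispatcher adds the secondary workers in vWork, producing the linked -1, -2 jobs described earlier.

```python
import requests

BASE_URL = "https://api.vworkapp.com"   # assumed; check the API docs
API_KEY = "your-api-key"                # placeholder credential

def create_primary_job(customer_name, address, scheduled_start):
    """Create the primary job from a template that has Multi-worker
    enabled. Field names and the template name are illustrative."""
    payload = {
        "template_name": "Multi-worker service call",  # must exist in vWork
        "customer_name": customer_name,
        "address": address,
        "scheduled_start": scheduled_start,  # e.g. "2024-07-01T08:00:00Z"
    }
    resp = requests.post(
        f"{BASE_URL}/jobs",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Returns the new job, including its vWork ID (e.g. 12345).
    # Secondary jobs added later by a dispatcher become 12345-1, 12345-2, ...
    return resp.json()
```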
3. Have multiple jobs under a project.
The third option is to use vWork projects. Projects in vWork enable Administrators to define project workflows: a template is defined for the project, along with which follow-on jobs need to be created and scheduled and what information needs to be shared across those jobs.
A good example of how this could be used is a UFB Fibre install. The main “order” job is created either via the API or the website, and a Dispatcher can then convert this primary job into a project. The conversion creates four separate but linked jobs: inspection, install, blow, and finally connection.
Each of these jobs may be done by different people at different times, and there may be dependencies between them. Each sub-job may use a different template with its own steps and fields, and information supplied in one job can be made available to the others.
The master “order” project is not considered complete until all four jobs are done (see the sketch below).
The main downside of this approach is the complexity involved in setting up the workflow, but vWork staff are available to assist with this.
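If an upstream system needs to know when the overall order is finished, a check along the following lines could work. It is a sketch only: the endpoint, the way the four sub-job IDs are known, and the state field are all assumptions for illustration.

```python
import requests

BASE_URL = "https://api.vworkapp.com"   # assumed; check the API docs
API_KEY = "your-api-key"                # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def order_complete(sub_job_ids):
    """Return True only when every linked sub-job (inspection, install,
    blow, connection) is completed. The 'state' field name is assumed."""
    for job_id in sub_job_ids:
        resp = requests.get(f"{BASE_URL}/jobs/{job_id}",
                            headers=HEADERS, timeout=10)
        resp.raise_for_status()
        if resp.json().get("state") != "completed":
            return False
    return True

# The four jobs created when the dispatcher converted the order to a project.
print(order_complete([20001, 20002, 20003, 20004]))
```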
4. Have multiple jobs not in a project.
The last solution is essentially option 3, but managed by a system upstream of vWork (i.e. a CRM or ERP creating the different jobs in vWork for a project and linking them together by a common job number). A number of customers do this today, and there are no downsides apart from needing an upstream solution (either master data or middleware) that can generate this information.
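A minimal sketch of this pattern, assuming a generic job-creation endpoint and a custom field that carries the shared order number (both assumptions for illustration), might look like this:

```python
import requests

BASE_URL = "https://api.vworkapp.com"   # assumed; check the API docs
API_KEY = "your-api-key"                # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def create_linked_jobs(order_number, stages):
    """Create one vWork job per stage, each tagged with the upstream
    order number so the jobs can be grouped together later."""
    job_ids = []
    for stage in stages:
        payload = {
            "template_name": stage["template"],   # e.g. "Inspection"
            "description": f"{stage['name']} for order {order_number}",
            # The common job number that links the jobs together.
            "custom_fields": {"order_number": order_number},
        }
        resp = requests.post(f"{BASE_URL}/jobs", json=payload,
                             headers=HEADERS, timeout=10)
        resp.raise_for_status()
        job_ids.append(resp.json().get("id"))
    return job_ids

# Example: an ERP creating the four stages for order "ORD-1042".
stages = [
    {"name": "Inspection", "template": "Inspection"},
    {"name": "Install", "template": "Install"},
    {"name": "Blow", "template": "Blow"},
    {"name": "Connection", "template": "Connection"},
]
create_linked_jobs("ORD-1042", stages)
```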