FAQ: ZapBox: "Mixed Reality" for $30
For the most up-to-date information about the status of our project, check our project updates on Kickstarter!

What if I want to mix & match pledges?

Great question. Kickstarter is set up so that each backer selects a single pledge level, and can also set their personal pledge amount.

While we think our current pledge tiers cover the majority of cases, a bundle of multiple ZapBox kits combined with early developer access is not currently covered.

If that describes what you want, please pledge at the bundle level (which will get you a nice multiple-unit discount), add $10 to your pledge, and send us a message to let us know. We’ll then include the additional benefits from the developer bundle tier with your reward.

We do need to limit the number of backers with early access, so we can only offer one set of early access benefits per backer, even if you’re pledging for multiple units.

Last updated: November 16, 2016 10:49

Which smartphones are supported?

For an optimal experience, we recommend a screen size of at least 4.5 inches with a resolution of at least 720p. The device must also have a gyroscope sensor. Although ZapBox would work on other devices, a smaller screen impacts the field of view that can be displayed in the headset and so offers a sub-optimal ZapBox experience.

ZapBox supports iPhone 6 or later, excluding the iPhone SE. On Android the sheer number of devices makes it impossible to give an exhaustive list of supported phones, but we can give some guidance. ZapBox requires Android 4.1 or later, but we recommend Android 5.0 or later, as newer versions expose more manual camera controls.

If you’re able to download the “Google Cardboard” app from the Play Store and the “Cardboard Demos” content renders smoothly, you should be good to go.

Last updated: November 16, 2016 10:49

Isn’t this just a Google Cardboard?

Take a look at the comparison chart on the page to see the differences.

What makes ZapBox unique is that its additional cardboard components and underlying software offer significantly more guaranteed functionality than the existing Google Cardboard ecosystem, empowering developers to create exciting new content experiences that are not possible with Cardboard alone.

With Cardboard the only guaranteed means of input is head rotation - many headsets don’t have any form of input button. ZapBox on the other hand offers full 3D tracking of both the user and a pair of handheld controllers, along with integrated MR rendering capabilities.

Last updated: November 16, 2016 10:49

Is the headset Google Cardboard compatible?

The headset is fully compatible with other Google Cardboard apps, so if you’ve yet to try out existing mobile VR, ZapBox is a great way to do that too.

Last updated: November 16, 2016 10:49

I’ve already got a Google Cardboard or other VR viewer, can I get a kit without the headset?

Many Google Cardboard or similar smartphone VR headsets don’t have openings for the camera. Even those that do allow camera access typically won’t accommodate the wide-angle lens adapter on the phone at the same time.

You’re of course welcome to modify another headset to accept your phone with the lens adapter and use that with ZapBox, but in the interests of keeping the pledges simple we decided it wasn’t worth adding a pledge level without the headset. You can always give it away to a friend if you really don’t want it, or let your gran experience a taste of VR :-)

Last updated: November 16, 2016 10:49

How come your live captures don’t show stereo rendering?

Never fear, the content will be rendered in stereo when in the headset. We thought it was less confusing to show it in non-stereo mode for the video (Magic Leap and HoloLens do the same for their videos).

It’s hard to get across in a video but the combination of stereo rendering and full 3D tracking of the user’s position delivers a really compelling experience.

Last updated: November 16, 2016 10:49

Don’t you need a see-through display for Mixed Reality?

There are two different ways to combine real and virtual scenes - optical see-through or video see-through.

HoloLens and Magic Leap use an optical see-through setup, where you view the real world directly and only the virtual content comes from the display. This gives a more natural view of the real world but does have some drawbacks: it’s harder to give a large field of view for virtual content, and showing completely solid content without dimming the entire real-world view isn’t possible. It’s also challenging to keep the virtual content locked in place relative to the real world as the user moves around; it turns out the speed of light is pretty quick!

With ZapBox we take a video see-through approach: the user’s view of the real world comes from a live camera feed rendered in the background, with the virtual content rendered on top. This offers a less natural view of the real world and is unlikely to be something you would want to wear all the time, but it does have some benefits - it permits fully opaque content and a wide field of view, and allows smooth transitions from MR to VR. It’s also easier to keep the real world and the virtual world completely aligned as the user moves around.
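For a feel for what video see-through means in code, here’s a minimal compositing sketch. It assumes the virtual scene has already been rendered to an RGBA image the same size as the camera frame; the function and array names are our own illustration (a real implementation would composite on the GPU), not Zappar’s renderer:

```python
# Hedged illustration of video see-through compositing: the live camera
# frame forms the background and an RGBA render of the virtual content
# is alpha-blended on top.
import numpy as np

def composite(camera_bgr: np.ndarray, virtual_rgba: np.ndarray) -> np.ndarray:
    """camera_bgr: HxWx3 uint8 camera frame; virtual_rgba: HxWx4 uint8 render."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0  # per-pixel opacity
    virt = virtual_rgba[..., :3].astype(np.float32)
    back = camera_bgr.astype(np.float32)
    return (alpha * virt + (1.0 - alpha) * back).astype(np.uint8)
```

Fully opaque content simply means alpha = 1 at those pixels - which is exactly what optical see-through can’t do without dimming the real world.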

Last updated: November 16, 2016 10:49

I’ve heard of Augmented Reality, how is “Mixed Reality” different?

If you think MR sounds a lot like Augmented Reality (AR), you’d be right. We’ve used the term Mixed Reality to describe ZapBox primarily because the experiences it delivers are similar to those shown on HoloLens and Magic Leap, and Mixed Reality is the term they use to describe their content. AR is popular on phones, but the more immersive, larger-scale experiences delivered through headsets feel different enough to warrant a new term.

If that all sounds like wishy-washy marketing speak to you, a more technical distinction between MR and AR can be made by considering the contextual relevance of the content to the real-world environment. AR is all about context - Pokémon GO characters appear in relevant places in the world, and visual tracking enables things like maintenance instructions overlaid on a specific model of printer. In MR, the real world is used as a stage for virtual content but the context is less relevant. Content appears anchored in the real world to allow natural exploration and interaction, but it doesn’t really “belong” there - mini golf doesn’t really belong in our office, but it’s sure fun to play!

AR gives a great sense of connection between the real and virtual, but you need to go in search of the objects or places with associated content. With the MR approach of ZapBox the content can come to you! This makes browsing, sharing and experiencing content much easier for users. For hobbyist developers the MR paradigm allows content to be shared with the audience of other ZapBox users without them needing to print out or find specific “target images” for your content.

We’re convinced there’s a big future for both AR and MR, and ZapWorks is an ideal platform for creating content for either paradigm.

Last updated: November 16, 2016 10:49

This seems too good to be true. What’s the catch?

ZapBox really does tick all of those boxes in the comparison chart for $30.

Achieving such an affordable price point meant avoiding any additional sensors or electronic components and limiting ourselves to the cameras and sensors available on existing smartphones.

While you shouldn’t be expecting ZapBox to offer an experience exactly on par with HoloLens, we’ve been able to make it 100 times cheaper! Our aim was not to make the perfect MR device, but to push the boundaries of what was possible with our existing smartphones and offer a genuinely affordable entry-point to this entirely new class of experiences.

Last updated: November 16, 2016 10:49

What about the latency from the camera feed?

The exact amount of latency from the camera is device-dependent. If you spin around quickly you will probably notice a delay between your motion and the rendered view.

It’s important to bear in mind that ZapBox offers a new class of experiences. With the 360-degree video content popular on Google Cardboard headsets, the user is almost encouraged to spin around quickly in case they’re missing some cool content going on behind them, and that’s why there’s been a push to minimize motion-to-photon latency in mobile VR.

ZapBox content is fully anchored in 3D and we find this encourages much slower user motion. The content tends to be more “outside-in”, where you walk around things of interest, rather than “inside-out”, where there are things happening all around you. These “outside-in” experiences are perfectly suited to ZapBox and are much less latency-sensitive than 360-degree videos.

Last updated: November 16, 2016 10:49

So how can you render a stereo view of the world from a single camera?

Headsets offering video see-through AR or MR would usually feature a camera for each eye so the live feed can also be shown in stereo.

Because ZapBox understands the world geometry from the map-building process, we can use that knowledge to produce two different renders of the camera image, so that content appears correctly anchored to the world in the stereo view.
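For the technically curious, here’s a rough sketch of the idea in the simplest case, where the mapped geometry is approximated by a single plane (a tabletop, say). The intrinsics, plane parameters, and eye spacing below are made-up example values, and this is our own illustration rather than Zappar’s actual implementation:

```python
# Illustrative only: re-render one camera frame from two virtual eye
# positions using the homography induced by a known world plane,
# H = K (I + t n^T / d) K^{-1}, where t is the eye offset.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],   # example camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([[0.0], [0.0], [1.0]])  # plane normal, camera coordinates
d = 0.5                              # camera-to-plane distance, metres

def eye_view(frame: np.ndarray, eye_x_m: float) -> np.ndarray:
    """Warp the camera frame to a virtual eye offset sideways by eye_x_m."""
    t = np.array([[-eye_x_m], [0.0], [0.0]])  # points map as x_eye = x_cam + t
    H = K @ (np.eye(3) + (t @ n.T) / d) @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))

frame = cv2.imread("camera_frame.png")  # any test image
left = eye_view(frame, -0.032)          # half of a ~64 mm eye spacing
right = eye_view(frame, 0.032)
```

Real scenes aren’t single planes, of course, but the same principle of reprojecting the image using known geometry carries over.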

Last updated: November 16, 2016 10:49

I’m a developer. Do I have to use ZapWorks Studio to make ZapBox content?

Initially ZapWorks Studio will be the only supported tool for publishing content for ZapBox. It supports importing assets such as videos and animated 3D content made in other tools (like Blender, 3DS Max, or Maya).

ZapWorks and the Zappar platform are perfectly suited to creating and publishing short-form experiences that don’t require app submissions for each piece of content. This provides a significant benefit both to users wanting to discover content and to developers wanting to share what they’ve built. We believe this integrated approach will allow a ZapBox ecosystem to develop and thrive.

Although it is theoretically possible to develop plugins exposing the underlying technology to other tools (e.g. Unity or Unreal), the resulting experiences would need to be distributed as standalone apps. This would add significantly more friction to the user experience, and hence is not currently a priority.

Last updated: November 16, 2016 10:49

Will I be able to make money from my content?

Content shared through the ZapBox app will be available for free. The ZapBox app is not an “app store” - we wouldn’t be allowed to distribute it if it was.

It’s possible that in future you will be able to embed the ZapBox technology into standalone apps, which you could submit to the app stores as paid apps if you want.

We expect the early days of ZapBox to be about smaller, bite-sized experiences as we collectively experiment with what can be done with the platform. The ability to easily share these experiences directly into the ZapBox app without needing to make separate app submissions should really encourage this experimentation and we’re really excited to see what you come up with.

Last updated: November 16, 2016 10:49

ZapWorks is free for personal use - what about commercial use?

ZapWorks has a seat-based licensing model for commercial use. You’ll need a “Pro Seat” for access to ZapWorks Studio to build ZapBox content. See https://zap.works for the details.

Generally, content will be available free to users, but if you are creating custom ZapBox MR experiences for events or giveaways, you will require a commercial ZapWorks subscription.

Last updated: November 16, 2016 10:49

Why not use SLAM for world tracking?

This might get technical…

SLAM stands for Simultaneous Localisation and Mapping. SLAM algorithms determine the user’s position with reference to a map of “natural features” - things that are already in the real world - which they build up as the user moves around. HoloLens uses SLAM to allow tracking without world markers.

In fact the “map building” part of ZapBox employs very similar maths to work out how the pointcodes are laid out. So why don’t we use SLAM in ZapBox?

When we first started thinking about ZapBox we had two core requirements: we wanted to support Mixed Reality experiences, so we needed to show the camera feed, but we also wanted natural, independently tracked interactions using handheld controllers. That combination means the controllers themselves have to be trackable even when small in the image (otherwise they’d cover the view of the rest of the world and prevent true “MR” experiences).

We developed pointcodes as a new marker design that can be detected very efficiently on mobile hardware, even when small in the image. A constellation of these markers on each controller allows us to accurately detect the controller from any angle, even when only some of the markers are visible. The same basic concept is also used for our pointcode world tracking. Using the same approach for both is great for performance - a single pointcode detection pass across the entire image is sufficient to track both the world and the controllers.
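To make the constellation idea concrete, here’s a hedged sketch of how a pose could be recovered from whichever markers happen to be visible. The pointcode detector itself is Zappar’s own technology, so the detection results, constellation layout, and intrinsics below are all hypothetical stand-ins:

```python
# Our own illustration of constellation pose estimation via PnP; not
# Zappar's actual pipeline. `detections` stands in for the output of a
# hypothetical pointcode detector: {code_id: (u, v)} image positions.
import numpy as np
import cv2

CONSTELLATION = {                 # assumed marker centres on the controller,
    0: (0.000, 0.000, 0.000),     # in its own coordinate frame, metres
    1: (0.030, 0.000, 0.010),
    2: (0.000, 0.030, 0.010),
    3: (-0.030, 0.000, 0.010),
    4: (0.000, -0.030, 0.010),
}

K = np.array([[800.0, 0.0, 320.0],  # example camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def controller_pose(detections):
    """Return (rvec, tvec) of the controller relative to the camera, or None."""
    ids = [i for i in detections if i in CONSTELLATION]
    if len(ids) < 4:                 # need at least 4 correspondences for PnP
        return None
    obj = np.array([CONSTELLATION[i] for i in ids], dtype=np.float64)
    img = np.array([detections[i] for i in ids], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    return (rvec, tvec) if ok else None
```

The key property is in the length check: any four or more visible markers are enough, which is what keeps a partially occluded controller trackable.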

Monocular (single-camera) RGB SLAM also has certain limitations: scale is ambiguous, it is difficult to cope with other objects moving in front of the camera (such as people’s arms or the controllers when interacting with content), and it requires sufficient image texture in the world.

The requirement for sufficient texture is especially challenging in smaller indoor home environments, particularly when combined with the low-FOV cameras on existing smartphones.

HoloLens is built from the ground up around SLAM-based world tracking. That’s why it has three wide-angle RGB cameras pointing in different directions and a depth camera (which presumably both helps tracking when close to a surface and provides an occlusion mask for the SLAM algorithm to prevent arm motion disrupting it). They also have custom DSPs to process all that data.

In summary, on mobile hardware we believe pointcodes offer the right tradeoff between robustness, performance, and ease of setup for world tracking.

See the following question for where other natural image features may fit in…

Last updated: November 18, 2016 04:03

Do I need so many pointcodes in the world? They look quite dense in your demos.

The videos are recorded live to give a true impression of the performance ZapBox offers, so they do represent the current state of our development.

Using the wide-angle lens adapter increases the field of view of the camera and hence reduces the density of codes required to robustly track the same area. We are also continuing to improve the tracking quality of individual codes and to add sensor fusion with the device accelerometer and gyroscope, which we expect to provide solid tracking with fewer codes in view.
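As an aside for developers, sensor fusion of this kind is often done with a complementary filter. The sketch below is our own minimal illustration of the idea, not Zappar’s published pipeline: integrate the gyroscope at high rate for smoothness, then gently correct drift toward the pointcode-derived orientation whenever one is available.

```python
# Minimal complementary-filter sketch (illustrative assumption, not
# Zappar's actual fusion code): high-rate gyro integration plus a small
# per-frame correction toward the visually tracked orientation.
import numpy as np

def orthonormalise(R: np.ndarray) -> np.ndarray:
    """Project a near-rotation matrix back onto the rotation group."""
    u, _, vt = np.linalg.svd(R)
    return u @ vt

def integrate_gyro(R: np.ndarray, omega_rad_s: np.ndarray, dt: float) -> np.ndarray:
    """First-order integration of angular velocity into orientation R."""
    wx, wy, wz = omega_rad_s * dt
    skew = np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])
    return orthonormalise(R @ (np.eye(3) + skew))

def correct_drift(R: np.ndarray, R_visual: np.ndarray, gain: float = 0.05) -> np.ndarray:
    """Blend a small fraction of the visual estimate into R each frame."""
    return orthonormalise((1.0 - gain) * R + gain * R_visual)
```

A production system would typically work with quaternions and slerp rather than blending rotation matrices, but the high-rate-integrate, low-rate-correct structure is the same.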

Another possibility we will investigate as ZapBox gets closer to launch is employing other natural image features to constrain the user’s position, either through visual odometry or full SLAM. See the question above for why we didn’t start with SLAM and why we don’t consider it a suitable primary solution for robust world tracking in the ZapBox context.

Finally, we are also planning to investigate blurring or in-filling the codes in the camera texture, in which case you won’t even see them in the experience itself (if they’re on a solid background…).

We think they look cool though :)

Last updated: November 18, 2016 04:01

Is this the most text ever written about some bits of cardboard?

Very probably.

Last updated: November 18, 2016 04:01

Who’s this William Ridgeway fellow?

We asked ourselves the same thing, as we’ve always known him as Jeff!

Jeff heads up our US operation and is the named person on the campaign so we meet Kickstarter’s requirements for US projects. This allows us to have all our pledges listed in US Dollars for the convenience of international backers.

Last updated: November 18, 2016 04:01

