Getting Started
Minimal SDK documentation for integrating our services.
Introduction
Build an AR workflow in React Native by combining our Roboflow-powered vision module with an AR viewer. Capture AR screenshots, run object detection, place 3D content at detected positions, and measure real-world distances by tapping objects.
Quick Start
To get started, install the packages this guide uses.
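Assuming the package names that appear in this guide's imports, installation would look something like this (your package manager may differ):

```shell
# Vision module, AR viewer, and filesystem access
# (names taken from the imports used throughout this guide)
npm install react-native-cogni-vision-rnroboflow react-native-ar-viewer react-native-fs

# iOS: link the native modules
cd ios && pod install && cd ..
```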
Initialization
Once installed, initialize the SDK with your API key.
import { useEffect, useState } from "react";
import Roboflow from "react-native-cogni-vision-rnroboflow";

export function useRoboflowReady() {
  const [ready, setReady] = useState(false);
  useEffect(() => {
    (async () => {
      await Roboflow.initialize(
        "YOUR_WORKSPACE_API_KEY",
        "YOUR_WORKSPACE_NAME",
        "YOUR_MODEL_ID",
        "YOUR_MODEL_VERSION"
      );
      await Roboflow.loadModel();
      setReady(true);
    })();
  }, []);
  return ready;
}

Requirements
- Use a physical device (ARKit/ARCore is not supported on most simulators).
- iOS: ARKit-capable device. Android: ARCore-capable device.
- Expo: use a custom dev client / prebuild (these are native modules).
Permissions
The AR viewer needs camera access.
iOS (Info.plist):

<key>NSCameraUsageDescription</key>
<string>We use the camera to power AR and object detection.</string>

Android (AndroidManifest.xml):

<uses-permission android:name="android.permission.CAMERA" />

Render the AR Viewer
Mount ArViewerView with a ref so you can call takeScreenshot(), getPositionVector3(), placeModel(), and createLineAndGetDistance().
import { useRef } from "react";
import { View } from "react-native";
import { ArViewerView } from "react-native-ar-viewer";

export function ARScreen({ modelUri }: { modelUri?: string }) {
  // The ref must live inside the component so each instance gets its own.
  const arRef = useRef<ArViewerView>(null);
  return (
    <View style={{ flex: 1 }}>
      <ArViewerView
        ref={arRef}
        model={modelUri} // must be a file:// URI
        style={{ flex: 1 }}
        manageDepth
        allowRotate
        allowScale
        allowTranslate
        disableInstantPlacement
        onStarted={() => arRef.current?.loadModel?.()}
        onUserTap={(e) => {
          const coords = e?.nativeEvent?.coordinates;
          if (!coords) return;
          // coords.x / coords.y are screen-space coordinates
        }}
        planeOrientation="both"
      />
    </View>
  );
}

Load a 3D Model (local file)
The AR viewer expects a local model path. A common pattern is to download a model once and keep it in RNFS.DocumentDirectoryPath. Android typically uses .glb; iOS typically uses .usdz.
import { useEffect, useState } from "react";
import { Platform } from "react-native";
import RNFS from "react-native-fs";

export function useLocalModelUri() {
  const [uri, setUri] = useState<string>();
  useEffect(() => {
    (async () => {
      const modelSrc =
        Platform.OS === "android"
          ? "https://.../model.glb"
          : "https://.../model.usdz";
      const ext = Platform.OS === "android" ? "glb" : "usdz";
      const dst = `${RNFS.DocumentDirectoryPath}/model.${ext}`;
      if (!(await RNFS.exists(dst))) {
        await RNFS.downloadFile({ fromUrl: modelSrc, toFile: dst }).promise;
      }
      setUri("file://" + dst);
    })();
  }, []);
  return uri;
}

Putting It Together (AR + Roboflow)
The core loop is takeScreenshot → detectObjects → getPositionVector3 → render.
const scan = async () => {
  arRef.current?.reset();
  const screenshot = await arRef.current?.takeScreenshot();
  if (!screenshot) return;
  const { predictions } = await Roboflow.detectObjects(screenshot);
  for (const det of predictions) {
    // det.x / det.y are 2D box centers in screenshot space
    const p = await arRef.current?.getPositionVector3(det.x, det.y);
    if (!p) continue;
    arRef.current?.placeModel(p.x, p.y, p.z);
  }
};

Overview
The React Native SDK is designed around a simple loop: render an AR session, capture a frame, run detection, then convert 2D detection coordinates into 3D world positions for placement and measurement.
- One client, consistent results across platforms.
- Configurable performance modes for latency or accuracy.
- Structured responses for analysis results.
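The "structured responses" above can be sketched as a TypeScript shape. The field names below follow Roboflow's hosted-inference response (box center plus size, label, confidence) but are assumptions here; check the SDK's own typings.

```typescript
// Assumed shape of one detection, modeled on Roboflow's inference response.
interface Prediction {
  x: number;          // box center x, in screenshot pixels
  y: number;          // box center y, in screenshot pixels
  width: number;      // box width
  height: number;     // box height
  class: string;      // predicted label
  confidence: number; // 0..1
}

interface DetectResult {
  predictions: Prediction[];
}

// Small helper: keep only confident detections before placing 3D content.
function filterByConfidence(preds: Prediction[], min: number): Prediction[] {
  return preds.filter((p) => p.confidence >= min);
}
```

Filtering on confidence before calling getPositionVector3 avoids raycasting for boxes the model is unsure about.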
Authentication
Roboflow inference requires an API key. Keep it out of source control and load it from env/config. Initialize once with your API key, then choose a published model via loadModel(projectSlug, version).
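One way to keep the key out of source control is a small guard that reads it from your env/config layer at startup. Both requireConfig and the variable name ROBOFLOW_API_KEY below are illustrative, not part of the SDK:

```typescript
// Illustrative helper (not part of the SDK): fail fast when the key is
// missing from whatever config source you use (react-native-config,
// process.env in a dev server, etc.).
export function requireConfig(
  source: Record<string, string | undefined>,
  name: string
): string {
  const value = source[name];
  if (!value) throw new Error(`Missing required config value: ${name}`);
  return value;
}

// Usage sketch:
// const apiKey = requireConfig(Config, "ROBOFLOW_API_KEY");
// await Roboflow.initialize(apiKey);
```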
Endpoints
Roboflow
Initialize once, load a model, then run detection on AR screenshots.
await Roboflow.initialize("YOUR_ROBOFLOW_API_KEY");
await Roboflow.loadModel("YOUR_PROJECT_SLUG", 1);
const result = await Roboflow.detectObjects(screenshot);
// result.predictions: Prediction[]

ArViewerView (ref)
Imperative helpers for capturing frames, raycasting into 3D, and rendering overlays.
const screenshot = await arRef.current?.takeScreenshot();
const p = await arRef.current?.getPositionVector3(x, y);
arRef.current?.placeModel(p.x, p.y, p.z);
arRef.current?.placeText(p.x, p.y, p.z, "#FF0000", "Selected");
const meters = await arRef.current?.createLineAndGetDistance(p1, p2, "#FF0000");
arRef.current?.reset();

Basic Usage
The core capture → detect → place flow:
// 1) Capture a frame from the AR view
const screenshot = await arRef.current?.takeScreenshot();
if (!screenshot) return;

// 2) Run detection on the screenshot
const detections = await Roboflow.detectObjects(screenshot);

// 3) Convert detection centers (2D) to world positions (3D) and place content
for (const det of detections.predictions) {
  const p = await arRef.current?.getPositionVector3(det.x, det.y);
  if (!p) continue;
  arRef.current?.placeModel(p.x, p.y, p.z);
}

Advanced Scenarios
A tap-to-select, tap-to-measure workflow built on detection hit-testing.
// Tap workflow: tap to select, tap again to measure (between centers).
// selectedObject / setSelectedObject are assumed to come from a useState
// hook in the enclosing component.
const didUserTapObject = (tapX, tapY, box) => {
  const left = box.x - box.width / 2;
  const right = box.x + box.width / 2;
  const top = box.y - box.height / 2;
  const bottom = box.y + box.height / 2;
  return tapX >= left && tapX <= right && tapY >= top && tapY <= bottom;
};

const onUserTap = async (coords) => {
  const screenshot = await arRef.current?.takeScreenshot();
  if (!screenshot) return;
  const { predictions } = await Roboflow.detectObjects(screenshot);
  const hit = predictions.find((p) => didUserTapObject(coords.x, coords.y, p));
  if (!hit) return;

  // 1st tap: select and annotate
  if (!selectedObject) {
    setSelectedObject(hit);
    const p = await arRef.current?.getPositionVector3(coords.x, coords.y);
    if (p) arRef.current?.placeText(p.x, p.y, p.z, "#FF0000", "Selected");
    return;
  }

  // 2nd tap: measure distance between 2D detection centers projected into 3D
  const p1 = await arRef.current?.getPositionVector3(selectedObject.x, selectedObject.y);
  const p2 = await arRef.current?.getPositionVector3(hit.x, hit.y);
  if (!p1 || !p2) return;
  const meters = await arRef.current?.createLineAndGetDistance(p1, p2, "#FF0000");
  console.log("📏 Distance (m):", meters);
  setSelectedObject(null); // clear the selection for the next measurement
};
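createLineAndGetDistance both draws the line and returns the distance. If you only need the number (or want to sanity-check the viewer's result), the distance between two world positions is plain Euclidean geometry:

```typescript
type Vec3 = { x: number; y: number; z: number };

// Euclidean distance between two world-space positions. AR session
// coordinates are in meters on both ARKit and ARCore.
function distanceMeters(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// distanceMeters({ x: 0, y: 0, z: 0 }, { x: 3, y: 4, z: 0 }) → 5
```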