SceneView


3D is just Compose UI.

SceneView brings 3D and AR into Jetpack Compose (Android) and SwiftUI (iOS, macOS, visionOS). Write a Scene { } the same way you write a Column { }. Nodes are composables. Lifecycle is automatic. State drives everything.



The idea

You already know how to build a screen:

Column {
    Text("Title")
    Image(painter = painterResource(R.drawable.cover), contentDescription = null)
    Button(onClick = { /* ... */ }) { Text("Open") }
}

This is a 3D scene — a photorealistic helmet, HDR lighting, orbit-camera gestures:

Scene(modifier = Modifier.fillMaxSize()) {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance, scaleToUnits = 1.0f, autoAnimate = true)
    }
    LightNode(apply = {
        type(LightManager.Type.SUN)
        intensity(100_000f)
        castShadows(true)
    })
}

Same pattern. Same Kotlin. Same mental model — now with depth.

No engine lifecycle callbacks. No addChildNode / removeChildNode. No onResume/onPause overrides. No manual cleanup. The Compose runtime handles all of it.


Platforms

| Platform | Renderer | Framework | Sample |
| --- | --- | --- | --- |
| Android | Filament | Jetpack Compose | samples/android-demo |
| Android TV | Filament | Compose TV | samples/android-tv-demo |
| iOS | RealityKit | SwiftUI | samples/ios-demo |
| macOS | RealityKit | SwiftUI | via SceneViewSwift |
| visionOS | RealityKit | SwiftUI | via SceneViewSwift |
| Web | Filament.js (WASM) | Kotlin/JS + WebXR | samples/web-demo |
| Desktop | Software / Filament JNI | Compose Desktop | samples/desktop-demo |
| Flutter | Filament / RealityKit | PlatformView | samples/flutter-demo |
| React Native | Filament / RealityKit | Fabric | samples/react-native-demo |

AR in 15 lines

var anchor by remember { mutableStateOf<Anchor?>(null) }

ARScene(
    modifier = Modifier.fillMaxSize(),
    planeRenderer = true,
    onSessionUpdated = { _, frame ->
        if (anchor == null) {
            anchor = frame.getUpdatedPlanes()
                .firstOrNull { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
                ?.let { frame.createAnchorOrNull(it.centerPose) }
        }
    }
) {
    anchor?.let { a ->
        AnchorNode(anchor = a) {
            ModelNode(modelInstance = helmet, scaleToUnits = 0.5f) // helmet: a ModelInstance loaded via rememberModelInstance
        }
    }
}

When the plane is detected, anchor becomes non-null. Compose recomposes. AnchorNode enters the composition. The model appears — anchored to the physical world. When anchor is cleared, the node is removed and destroyed automatically. Pure Compose semantics, in AR.


What's new in 3.0

SceneView 3.0 is a ground-up rewrite around a single idea: 3D is just more Compose UI.

| What changed | What it means for you |
| --- | --- |
| Scene { } / ARScene { } content block | Declare nodes as composables — no list, no add() |
| SceneScope / ARSceneScope DSL | Every node type (ModelNode, AnchorNode, LightNode, ...) is @Composable |
| NodeScope trailing lambda | Nest child nodes exactly like Column { } nests children |
| rememberModelInstance | Async loading — returns null while loading, recomposes when ready |
| SceneNodeManager | Internal bridge — Compose snapshot state drives the Filament scene graph |
| ViewNode | Embed any Compose UI as a 3D billboard inside the scene |
| SurfaceType enum | Choose SurfaceView (best performance) or TextureView (transparency) |
| All resources are remember | Engine, loaders, environment, camera — Compose owns the lifecycle |

See MIGRATION.md for a step-by-step upgrade guide from 2.x.
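A hedged before/after sketch of the shift (the 2.x lines are abbreviated from the old imperative API and may not match your exact call sites):

```kotlin
// SceneView 2.x (imperative): you own the node lifecycle
// val node = ModelNode(engine).apply { loadModelGlbAsync("models/helmet.glb") }
// sceneView.addChildNode(node)                 // add by hand...
// override fun onDestroy() { node.destroy() }  // ...and clean up by hand

// SceneView 3.0 (declarative): the Compose runtime owns the lifecycle
Scene(modifier = Modifier.fillMaxSize()) {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance)  // enters and leaves the scene with composition
    }
}
```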



3D with Compose

Installation

dependencies {
    implementation("io.github.sceneview:sceneview:3.3.0")
}

Quick start

Scene is a @Composable that renders a Filament 3D viewport. Think of it as a Box that adds a third dimension — everything inside its trailing block is declared with the SceneScope DSL.

@Composable
fun ModelViewerScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val environmentLoader = rememberEnvironmentLoader(engine)

    // Loaded asynchronously — null until ready, then recomposition places it in the scene
    val modelInstance = rememberModelInstance(modelLoader, "models/damaged_helmet.glb")
    val environment = rememberEnvironment(environmentLoader) {
        environmentLoader.createHDREnvironment("environments/sky_2k.hdr")
            ?: createEnvironment(environmentLoader)
    }

    Scene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        environment = environment,
        cameraManipulator = rememberCameraManipulator(),
        mainLightNode = rememberMainLightNode(engine) { intensity = 100_000.0f },
        onGestureListener = rememberOnGestureListener(
            onDoubleTap = { _, node -> node?.apply { scale *= 2.0f } }
        )
    ) {
        // ── Everything below is 3D Compose ─────────────────────────────────

        modelInstance?.let { instance ->
            ModelNode(modelInstance = instance, scaleToUnits = 1.0f, autoAnimate = true)
        }

        // Nodes nest exactly like Compose UI
        Node(position = Position(y = 1.5f)) {
            CubeNode(size = Size(0.2f), materialInstance = redMaterial) // redMaterial: a MaterialInstance created elsewhere
            SphereNode(radius = 0.1f)
        }
    }
}

That's it. No engine lifecycle callbacks, no onResume/onPause overrides, no manual scene graph bookkeeping. The Compose runtime handles all of it.

SceneScope DSL reference

All composables available inside Scene { }:

| Composable | Description |
| --- | --- |
| ModelNode(modelInstance, scaleToUnits?) | Renders a glTF/GLB model. Set isEditable = true to enable pinch-to-scale and drag-to-rotate. |
| LightNode(apply = { type(…); intensity(…) }) | Directional, point, spot, or sun light. apply is a named parameter, not a trailing lambda. |
| CameraNode() | Named camera (e.g. imported from a glTF) |
| CubeNode(size, materialInstance?) | Box geometry |
| SphereNode(radius, materialInstance?) | Sphere geometry |
| CylinderNode(radius, height, materialInstance?) | Cylinder geometry |
| PlaneNode(size, normal, materialInstance?) | Flat quad geometry |
| ImageNode(bitmap / fileLocation / resId) | Image rendered on a plane |
| ViewNode(windowManager) { ComposeUI } | Compose UI rendered as a 3D surface |
| MeshNode(primitiveType, vertexBuffer, indexBuffer) | Custom GPU mesh |
| Node() | Pivot / group node |

Gesture sensitivity — Node exposes scaleGestureSensitivity: Float (default 0.5). Lower values make pinch-to-scale feel more progressive. Tune it per-node in the apply block:

ModelNode(modelInstance = instance, isEditable = true, apply = {
    scaleGestureSensitivity = 0.3f   // 1.0 = raw, lower = more damped
    editableScaleRange = 0.2f..1.0f
})
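The damping here is plain math. As a standalone illustration (not SceneView's internal formula), a sensitivity exponent can be applied to the raw pinch factor before clamping the result to an allowed range:

```kotlin
import kotlin.math.pow

// Illustrative only: damp a raw pinch factor with a sensitivity exponent,
// then clamp the resulting scale to an allowed range.
// sensitivity = 1.0 passes the gesture through; lower values damp it.
fun dampedScale(
    current: Float,
    rawPinchFactor: Float,
    sensitivity: Float = 0.5f,
    range: ClosedFloatingPointRange<Float> = 0.2f..1.0f
): Float {
    val damped = rawPinchFactor.toDouble().pow(sensitivity.toDouble()).toFloat()
    return (current * damped).coerceIn(range.start, range.endInclusive)
}
```

With sensitivity 0.5, a raw 4x pinch applies only 2x, and the result never leaves the editableScaleRange-style bounds.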

Every node accepts an optional content trailing lambda — a NodeScope where child composables are automatically parented to the enclosing node:

Scene {
    Node(position = Position(y = 0.5f)) {    // NodeScope
        ModelNode(modelInstance = helmet)     // child of Node
        CubeNode(size = Size(0.05f))          // sibling, still a child of Node
    }
}

Async model loading — rememberModelInstance returns null while the file loads on Dispatchers.IO, then triggers recomposition. The node appears automatically when ready:

Scene {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance, scaleToUnits = 0.5f)
    }
}

Compose UI inside 3D space — ViewNode renders any composable onto a plane in the scene:

val windowManager = rememberViewNodeManager()

Scene {
    ViewNode(windowManager = windowManager) {
        Card {
            Text("Hello from 3D!")
            Button(onClick = { /* ... */ }) { Text("Click me") }
        }
    }
}

Reactive state — pass any State directly into node parameters. The scene updates on every state change with no manual synchronisation:

var rotationY by remember { mutableFloatStateOf(0f) }
LaunchedEffect(Unit) { while (true) { withFrameNanos { rotationY += 0.5f } } }

Scene {
    ModelNode(
        modelInstance = helmet,
        rotation = Rotation(y = rotationY)   // recomposes on every frame change
    )
}
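Because node parameters are plain Compose state, the standard androidx animation APIs compose with them too. A sketch (assuming the same helmet instance as above) that uses animateFloatAsState, a stock Jetpack Compose API, to tween the rotation whenever a toggle flips:

```kotlin
var flipped by remember { mutableStateOf(false) }
val angle by animateFloatAsState(targetValue = if (flipped) 180f else 0f, label = "yaw")

Box(modifier = Modifier.fillMaxSize()) {
    Scene(modifier = Modifier.fillMaxSize()) {
        // The node re-renders with every intermediate value of the tween
        ModelNode(modelInstance = helmet, rotation = Rotation(y = angle))
    }
    Button(onClick = { flipped = !flipped }) { Text("Flip") }
}
```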

Tap interaction — rememberOnGestureListener delivers per-node tap callbacks, and isEditable = true enables pinch-to-scale, drag-to-move, and two-finger-rotate gestures on any node with zero extra code:

Scene(
    onGestureListener = rememberOnGestureListener(
        onSingleTapConfirmed = { event, node -> println("Tapped: ${node?.name}") }
    )
) {
    ModelNode(modelInstance = helmet, isEditable = true)
}

Surface type — choose the backing Android surface:

// SurfaceView — renders behind Compose layers, best GPU performance (default)
Scene(surfaceType = SurfaceType.Surface)

// TextureView — renders inline with Compose, supports transparency / alpha blending
Scene(surfaceType = SurfaceType.TextureSurface, isOpaque = false)

Samples

| Sample | What it shows |
| --- | --- |
| Model Viewer | Animated camera orbit around a glTF model, HDR environment, double-tap to scale |
| glTF Camera | Use a camera node imported directly from a glTF file |
| Camera Manipulator | Orbit / pan / zoom camera interaction |
| Autopilot Demo | Full animated scene built entirely with geometry nodes — no model files needed |

AR with Compose

Installation

dependencies {
    // Includes sceneview — no need to add both
    implementation("io.github.sceneview:arsceneview:3.3.0")
}

Add to AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />

<application>
    <meta-data android:name="com.google.ar.core" android:value="required" />
</application>
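Because the manifest marks ARCore as required, it can be worth a runtime availability check before navigating to the AR screen. A sketch using ArCoreApk from the standard com.google.ar.core SDK:

```kotlin
import android.content.Context
import com.google.ar.core.ArCoreApk

// True when this device can run an AR session right now.
// Note: checkAvailability can transiently return a *_CHECKING value,
// in which case you should re-query shortly afterwards.
fun isArReady(context: Context): Boolean =
    ArCoreApk.getInstance().checkAvailability(context) ==
        ArCoreApk.Availability.SUPPORTED_INSTALLED
```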

Quick start

ARScene is Scene with ARCore wired in. The camera is driven by ARCore tracking. Everything else — anchors, models, lights, UI — is declared in the ARSceneScope content block. Normal Compose state decides what is in the scene.

var anchor by remember { mutableStateOf<Anchor?>(null) }

val engine = rememberEngine()
val modelLoader = rememberModelLoader(engine)
val modelInstance = rememberModelInstance(modelLoader, "models/helmet.glb")

ARScene(
    modifier = Modifier.fillMaxSize(),
    engine = engine,
    modelLoader = modelLoader,
    cameraNode = rememberARCameraNode(engine),
    planeRenderer = true,
    sessionConfiguration = { session, config ->
        config.depthMode =
            if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC))
                Config.DepthMode.AUTOMATIC
            else Config.DepthMode.DISABLED
        config.instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
        config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
    },
    onSessionUpdated = { _, frame ->
        if (anchor == null) {
            anchor = frame.getUpdatedPlanes()
                .firstOrNull { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
                ?.let { frame.createAnchorOrNull(it.centerPose) }
        }
    }
) {
    // ── AR Compose content ───────────────────────────────────────────────────

    anchor?.let {
        AnchorNode(anchor = it) {
            // All SceneScope nodes are available inside AR nodes too
            modelInstance?.let { instance ->
                ModelNode(modelInstance = instance, scaleToUnits = 0.5f)
            }
        }
    }
}

The anchor drives state. When anchor changes, Compose recomposes and AnchorNode appears. When the anchor is cleared, the node is removed and destroyed automatically. AR state is just Kotlin state.
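Because removal is just state, "reset placement" is a one-liner. A sketch assuming the anchor variable from the snippet above; detaching the ARCore anchor releases its tracking resources before Compose tears the node down:

```kotlin
Button(onClick = {
    anchor?.detach()  // release ARCore tracking resources
    anchor = null     // AnchorNode leaves the composition and is destroyed
}) { Text("Reset placement") }
```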

ARSceneScope DSL reference

ARScene { } provides everything from SceneScope plus:

| Composable | Description |
| --- | --- |
| AnchorNode(anchor) | Follows a real-world ARCore anchor |
| PoseNode(pose) | Follows a world-space pose (non-persistent) |
| HitResultNode(xPx, yPx) | Auto hit-tests at a screen coordinate each frame |
| HitResultNode { frame -> hitResult } | Custom hit-test lambda |
| AugmentedImageNode(augmentedImage) | Tracks a detected real-world image |
| AugmentedFaceNode(augmentedFace) | Renders a mesh aligned to a detected face |
| CloudAnchorNode(anchor) | Persistent cross-device anchor via Google Cloud |
| TrackableNode(trackable) | Follows any ARCore trackable |
| StreetscapeGeometryNode(streetscapeGeometry) | Renders a Geospatial streetscape mesh |
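As one example from the table, HitResultNode can drive a classic placement reticle. A hedged sketch: it assumes the coordinates are Float pixels, and screenCenterX / screenCenterY are placeholders you would compute from the view size:

```kotlin
ARScene(modifier = Modifier.fillMaxSize()) {
    // Hit-tests the screen center every frame; children follow the latest hit pose
    HitResultNode(xPx = screenCenterX, yPx = screenCenterY) {
        // A flat disc as a simple reticle marker
        CylinderNode(radius = 0.05f, height = 0.005f)
    }
}
```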

Augmented Images

var detectedImages by remember { mutableStateOf<Set<AugmentedImage>>(emptySet()) }

ARScene(
    sessionConfiguration = { session, config ->
        config.augmentedImageDatabase = AugmentedImageDatabase(session).also { db ->
            db.addImage("cover", coverBitmap)
        }
    },
    onSessionUpdated = { _, frame ->
        detectedImages += frame.getUpdatedTrackables(AugmentedImage::class.java)
            .filter { it.trackingState == TrackingState.TRACKING }
    }
) {
    detectedImages.forEach { image ->
        AugmentedImageNode(augmentedImage = image) {
            rememberModelInstance(modelLoader, "drone.glb")?.let { instance ->
                ModelNode(modelInstance = instance)
            }
        }
    }
}

Augmented Faces

var detectedFaces by remember { mutableStateOf<List<AugmentedFace>>(emptyList()) }

ARScene(
    sessionFeatures = setOf(Session.Feature.FRONT_CAMERA),
    sessionConfiguration = { _, config ->
        config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D
    },
    onSessionUpdated = { session, _ ->
        detectedFaces = session.getAllTrackables(AugmentedFace::class.java)
            .filter { it.trackingState == TrackingState.TRACKING }
    }
) {
    detectedFaces.forEach { face ->
        // faceMaterial: a MaterialInstance created elsewhere
        AugmentedFaceNode(augmentedFace = face, meshMaterialInstance = faceMaterial)
    }
}

Geospatial Streetscape

var geometries by remember { mutableStateOf<List<StreetscapeGeometry>>(emptyList()) }

ARScene(
    sessionConfiguration = { _, config ->
        config.geospatialMode = Config.GeospatialMode.ENABLED
        config.streetscapeGeometryMode = Config.StreetscapeGeometryMode.ENABLED
    },
    onSessionUpdated = { _, frame ->
        geometries = frame.getUpdatedTrackables(StreetscapeGeometry::class.java).toList()
    }
) {
    geometries.forEach { geo ->
        // buildingMat: a MaterialInstance created elsewhere
        StreetscapeGeometryNode(streetscapeGeometry = geo, meshMaterialInstance = buildingMat)
    }
}

Samples

| Sample | What it shows |
| --- | --- |
| AR Model Viewer | Tap-to-place on detected planes, model picker, animated reticle, pinch-to-scale, drag-to-rotate |
| AR Augmented Image | Overlay content on detected real-world images |
| AR Cloud Anchors | Host and resolve persistent cross-device anchors |
| AR Point Cloud | Visualise ARCore feature points |
| Autopilot Demo | Autonomous AR scene driven entirely by Compose state |

Apple platforms (iOS, macOS, visionOS)

SceneView is available for all Apple platforms via the SceneViewSwift package — a native Swift Package built on SwiftUI and RealityKit with 17 node types. Same concepts as Android (declarative scene building, model loading, gesture controls) using native Apple frameworks.

Supported: iOS 17+ · macOS 14+ · visionOS 1+

Node types: ModelNode · GeometryNode (cube, sphere, cylinder, cone, plane) · MeshNode · LightNode · CameraNode · TextNode · BillboardNode · ImageNode · VideoNode · LineNode · PathNode · PhysicsNode · DynamicSkyNode · FogNode · ReflectionProbeNode · AugmentedImageNode · AnchorNode

// Package.swift
dependencies: [
    .package(url: "https://github.com/SceneView/SceneViewSwift.git", from: "3.3.0")
]
SceneView { root in
    let model = try? await ModelNode.load("helmet.usdz")
    model?.scaleToUnits(1.0)
    if let model { root.addChild(model.entity) }
}
.environment(.studio)
.cameraControls(.orbit)

AR on iOS:

ARSceneView(
    planeDetection: .horizontal,
    onTapOnPlane: { position, arView in
        let cube = GeometryNode.cube(size: 0.1, color: .blue)
        let anchor = AnchorNode.world(position: position)
        anchor.add(cube.entity)
        arView.scene.addAnchor(anchor.entity)
    }
)

See the SceneViewSwift/ directory for the full library, demo app, and documentation. For a step-by-step guide, see the iOS Quickstart.

Cross-framework iOS support

SceneViewSwift is the native Apple rendering layer, consumable by any iOS framework:

| Framework | Integration |
| --- | --- |
| Swift native | import SceneViewSwift via SPM |
| Flutter | PlatformView wrapping SceneView/ARSceneView |
| React Native | Turbo Module / Fabric component |
| KMP Compose | UIKitView in Compose iOS |

Architecture: native renderer per platform

Each platform uses its native rendering engine. Shared logic (math, collision, geometry, animations) lives in sceneview-core via Kotlin Multiplatform.

sceneview-core (KMP) — shared algorithms (math, collision, geometry, physics)
├── sceneview (Android)     — Filament renderer, Jetpack Compose
├── arsceneview (Android)   — ARCore integration
├── SceneViewSwift (Apple)  — RealityKit renderer, SwiftUI
├── sceneview-web (Web)     — Filament.js (WASM), Kotlin/JS
└── sceneview-desktop (JVM) — Filament JNI, Compose Desktop (scaffold)

All supported platforms

| Platform | Renderer | Framework | Status |
| --- | --- | --- | --- |
| Android | Filament | Jetpack Compose | Stable (v3.3.0) |
| Android TV | Filament | Compose TV | Alpha |
| iOS | RealityKit | SwiftUI | Alpha (v3.3.0) |
| macOS | RealityKit | SwiftUI | Alpha (v3.3.0) |
| visionOS | RealityKit | SwiftUI | Alpha (v3.3.0) |
| Web | Filament.js (WASM) | Kotlin/JS | Alpha |
| Flutter | Filament / RealityKit | PlatformView | Alpha |
| React Native | Filament / RealityKit | Fabric | Alpha |
| Desktop | Filament JNI | Compose Desktop | Scaffold |

Kotlin Multiplatform (sceneview-core)

The core math, collision, geometry, animation, and physics modules are shared across Android and Apple platforms via Kotlin Multiplatform in sceneview-core/. This includes Vector3, Quaternion, Ray, Box, Sphere, Earcut, Delaunator, spring animations, and more.
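To give a feel for the kind of API the shared layer exposes, here is a self-contained stand-in, written for illustration only and not copied from sceneview-core:

```kotlin
// Simplified stand-in for a shared math type such as Vector3 (illustration only)
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
}

// Linear interpolation: the basic building block behind tween/spring animations
fun lerp(a: Vec3, b: Vec3, t: Float): Vec3 = a * (1f - t) + b * t
```

For example, lerp(Vec3(0f, 0f, 0f), Vec3(2f, 4f, 6f), 0.5f) yields the midpoint Vec3(1f, 2f, 3f).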

Platform parity

| Feature | Android | iOS / macOS / visionOS |
| --- | --- | --- |
| 3D scene composable | Scene { } | SceneView { } |
| AR scene | ARScene { } | ARSceneView(...) (iOS only) |
| Model loading | glTF/GLB | USDZ |
| Procedural geometry | CubeNode, SphereNode, CylinderNode, PlaneNode | GeometryNode (cube, sphere, cylinder, cone, plane) |
| Custom mesh | MeshNode | MeshNode |
| Text | TextNode | TextNode |
| Billboards | BillboardNode | BillboardNode |
| Lines / paths | LineNode | LineNode, PathNode |
| Images | ImageNode | ImageNode |
| Video | -- | VideoNode |
| Lighting | LightNode | LightNode (directional, point, spot) |
| Camera | CameraNode | CameraNode |
| Orbit camera | rememberCameraManipulator() | .cameraControls(.orbit) |
| Environment/HDR | rememberEnvironment() | .environment(.studio) |
| Gesture editing | isEditable = true | Drag/pinch/tap built-in |
| Physics | PhysicsNode | PhysicsNode |
| Dynamic sky | DynamicSkyNode | DynamicSkyNode |
| Fog | -- | FogNode |
| Reflections | -- | ReflectionProbeNode |
| Augmented images | AugmentedImageNode | AugmentedImageNode |
| Face tracking | AugmentedFaceNode | -- |
| Cloud anchors | CloudAnchorNode | -- |
| Renderer | Google Filament | Apple RealityKit |
| AR framework | Google ARCore | Apple ARKit |
| Desktop | -- | macOS 14+ |
| Spatial computing | -- | visionOS 1+ |

Resources

Documentation

Community

Related Projects

Support the project

SceneView is open-source and community-funded.

About

The #1 Android 3D & AR SDK — Jetpack Compose composables powered by Google Filament and ARCore. Drop-in Scene{} and ARScene{} for model viewing, AR placement, and immersive experiences. Successor to Google Sceneform.
