

    2.0.3 • Public • Published


    Human Library

    AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,
    Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,
    Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation

    JavaScript module using TensorFlow/JS Machine Learning library

    • Browser:
      Compatible with both desktop and mobile platforms
      Compatible with CPU, WebGL, WASM backends
      Compatible with WebWorker execution
    • NodeJS:
      Compatible with both software execution using tfjs-node and
      GPU-accelerated execution using tfjs-node-gpu with CUDA libraries
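    For NodeJS, the backend is selected via configuration; a minimal sketch follows (option names follow Human's configuration schema, but treat the exact values as assumptions and verify them against the Configuration wiki page):

```javascript
// minimal NodeJS configuration sketch for Human
// (option names follow Human's configuration schema; treat exact values as
// assumptions and verify them against the Configuration wiki page)
const nodeConfig = {
  backend: 'tensorflow',           // execution via tfjs-node or tfjs-node-gpu
  modelBasePath: 'file://models/', // load model files from the local filesystem
  debug: false,
};

// usage (requires the package and tfjs-node to be installed):
// const Human = require('@vladmandic/human').default;
// const human = new Human(nodeConfig);
```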

    Check out the Live Demo app for processing of live WebCam video or static images

    • To start video detection, simply press Play
    • To process images, simply drag & drop them into your browser window
    • Note: For optimal performance, select only the models you'd like to use
    • Note: If you have a modern GPU, the WebGL (default) backend is preferred; otherwise select the WASM backend

    Note: Human release 2.0 contains a large list of changes; see the Change log for details


    Project pages

    Wiki pages

    Additional notes

    See issues and discussions for list of known limitations and planned enhancements

    Suggestions are welcome!


    All options as presented in the demo application...


    Options visible in demo


    Face Close-up:

    Face under a high angle:

    Full Person Details:

    Pose Detection:

    Body Segmentation and Background Replacement:

    Large Group:

    Face Similarity Matching:
    Extracts all faces from the provided input images,
    sorts them by similarity to a selected face,
    and optionally matches each detected face against a database of known people to guess their names
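    The similarity sort described above boils down to comparing face descriptor vectors. A minimal sketch of such a comparison using cosine similarity (a hypothetical helper for illustration; Human ships its own matching methods, which may use a different metric):

```javascript
// cosine similarity between two face descriptor vectors
// (hypothetical helper; Human's own matching may use a different metric)
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// sort candidate descriptors by decreasing similarity to a selected descriptor
function sortBySimilarity(selected, candidates) {
  return [...candidates].sort(
    (x, y) => cosineSimilarity(selected, y) - cosineSimilarity(selected, x));
}
```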


    Face Matching

    Face3D OpenGL Rendering:



    468-Point Face Mesh Details:
    (view in full resolution to see keypoints)


    Quick Start

    Simply load Human (IIFE version) directly from a cloud CDN in your HTML file:
    (pick one: jsdelivr, unpkg or cdnjs)

    <script src=""></script>
    <script src=""></script>
    <script src=""></script>

    For details, including how to use Browser ESM version or NodeJS version of Human, see Installation


    Human library can process all known input types:

    • Image, ImageData, ImageBitmap, Canvas, OffscreenCanvas, Tensor,
    • HTMLImageElement, HTMLCanvasElement, HTMLVideoElement, HTMLMediaElement

    Additionally, HTMLVideoElement and HTMLMediaElement can be a standard <video> tag that links to:

    • WebCam on user's system
    • Any supported video type
      For example: .mp4, .avi, etc.
    • Additional video types supported via HTML5 Media Source Extensions
      Live streaming examples:
      • HLS (HTTP Live Streaming) using hls.js
      • DASH (Dynamic Adaptive Streaming over HTTP) using dash.js
    • WebRTC media track using built-in support
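    As an illustration of the WebCam case above, a standard getUserMedia call can wire the camera to a <video> element that is then passed to Human (browser APIs only; the element id is a placeholder):

```javascript
// wire the user's webcam to a <video> element that Human can consume
// (standard browser APIs; the element id is a placeholder)
async function startWebcam(videoElementId) {
  const video = document.getElementById(videoElementId);
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  video.srcObject = stream;
  await video.play();
  return video; // pass this element to human.detect(video)
}
```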


    An example of a simple app that uses Human to process video input and
    draw the output on screen using the internal draw helper functions:

    // create instance of human with simple configuration using default values
    const config = { backend: 'webgl' };
    const human = new Human(config);
    function detectVideo() {
      // select input HTMLVideoElement and output HTMLCanvasElement from page
      const inputVideo = document.getElementById('video-id');
      const outputCanvas = document.getElementById('canvas-id');
      // perform processing using default configuration
      human.detect(inputVideo).then((result) => {
        // result object will contain detected details
        // as well as the processed canvas itself
        // so let's first draw the processed frame on the canvas
        human.draw.canvas(result.canvas, outputCanvas);
        // then draw results on the same canvas
        human.draw.face(outputCanvas, result.face);
        human.draw.body(outputCanvas, result.body);
        human.draw.hand(outputCanvas, result.hand);
        human.draw.gesture(outputCanvas, result.gesture);
        // and loop immediately to the next frame
        requestAnimationFrame(detectVideo);
      });
    }
    detectVideo();

    or using async/await:

    // create instance of human with simple configuration using default values
    const config = { backend: 'webgl' };
    const human = new Human(config); // create instance of Human
    async function detectVideo() {
      const inputVideo = document.getElementById('video-id');
      const outputCanvas = document.getElementById('canvas-id');
      const result = await human.detect(inputVideo); // run detection
      human.draw.all(outputCanvas, result); // draw all results
      requestAnimationFrame(detectVideo); // run loop
    }
    detectVideo(); // start loop

    or using interpolated results for smooth video processing by separating detection and drawing loops:

    const human = new Human(); // create instance of Human
    const inputVideo = document.getElementById('video-id');
    const outputCanvas = document.getElementById('canvas-id');
    let result;
    async function detectVideo() {
      result = await human.detect(inputVideo); // run detection
      requestAnimationFrame(detectVideo); // run detect loop
    }
    async function drawVideo() {
      if (result) { // check if result is available
        const interpolated = human.next(result); // calculate next interpolated frame
        human.draw.all(outputCanvas, interpolated); // draw the frame
      }
      requestAnimationFrame(drawVideo); // run draw loop
    }
    detectVideo(); // start detection loop
    drawVideo(); // start draw loop

    And for even better results, you can run detection in a separate web worker thread
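    A rough sketch of such a worker setup, assuming a hypothetical human-worker.js script that runs detection on received frames and posts results back (the actual demo's worker protocol may differ):

```javascript
// sketch of offloading detection to a web worker (browser APIs;
// 'human-worker.js' is a hypothetical worker script that runs human.detect
// on received ImageData and posts the result back)
function createDetectionWorker(video, onResult) {
  const worker = new Worker('human-worker.js');
  worker.onmessage = (msg) => onResult(msg.data); // receive detection results
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');
  function sendFrame() {
    ctx.drawImage(video, 0, 0);
    const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
    worker.postMessage({ image }, [image.data.buffer]); // transfer pixel buffer
    requestAnimationFrame(sendFrame);
  }
  sendFrame();
  return worker;
}
```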

    Default models

    Default models in Human library are:

    • Face Detection: MediaPipe BlazeFace - Back variation
    • Face Mesh: MediaPipe FaceMesh
    • Face Iris Analysis: MediaPipe Iris
    • Face Description: HSE FaceRes
    • Emotion Detection: Oarriaga Emotion
    • Body Analysis: MoveNet - Lightning variation

    Note that alternative models are provided and can be enabled via configuration
    For example, the PoseNet model can be switched for the BlazePose, EfficientPose or MoveNet model depending on the use case
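    A configuration fragment illustrating such a model switch (the modelPath file name here is an assumption; see the List of Models for actual names):

```javascript
// configuration sketch for switching the body model
// (the modelPath value is an assumption; see List of Models for actual names)
const config = {
  body: {
    enabled: true,
    modelPath: 'blazepose.json', // e.g. switch to BlazePose
  },
};
```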

    For more info, see Configuration Details and List of Models

    Human library is written in TypeScript 4.3,
    conforming to the JavaScript ECMAScript 2020 standard
    The build target is JavaScript ECMAScript 2018

    For details see Wiki Pages
    and API Specification



    npm i @vladmandic/human
