Kinovea is a video annotation tool designed for motion analysis.
It features utilities to capture, slow down, compare, annotate and measure motion in videos.
For a single-page overview of the features of Kinovea you may consult the Features page on the website.
The sections below and the table of contents in the sidebar should let you access the documentation for your topic of interest.
You can also use the search function in the top left corner.
The file explorer shows a tree view of the folders on your computer.
The bottom part of the tab shows a list of supported files found in the selected folder.
The shortcut explorer shows a tree view of folders that were bookmarked as favorites.
To add a shortcut to a folder in this list, right click the selected folder in the file explorer tab or in the shortcut tab and choose Add to shortcuts.
You can also use the Shortcut toolbar in the shortcut panel to add and remove shortcuts.
The thumbnails panel is displayed when there are no playback or capture screens loaded.
It shows thumbnails corresponding to the active tab of the explorer panel.
In the case of the file explorer and shortcut tabs, it displays thumbnails of the files in the selected folder.
In the case of the camera explorer it displays thumbnails of all the cameras connected to the computer.
Some metadata can be overlaid on the thumbnails of videos or images.
To configure which metadata is overlaid right click the background of the explorer to bring up the context menu.
Enabling the Kva file metadata makes the thumbnail display “kva” when annotations exist as a sidecar file to the main video.
The screens panel replaces the thumbnails panel when one or more screens are opened.
When two screens of the same type are opened, joint controls are added at the bottom.
Adding and removing screens can be done by using the main toolbar, the menu View or the close buttons on the screens themselves.
To swap the screens use the menu View ‣ Swap screens or the swap button in the joint controls.
You can save the current arrangement of screens as the default workspace by using the menu Options ‣ Workspace ‣ Save as default workspace.
The next time Kinovea starts it will read this workspace and reload the videos and cameras accordingly.
To delete the default workspace and make Kinovea start normally on the thumbnail explorer use the menu Options ‣ Workspace ‣ Forget default workspace.
Workspaces can be exported to separate XML files using the menu Options ‣ Workspace ‣ Export workspace.
To start Kinovea using an explicit workspace file it must be passed as an argument to the command line.
It is possible to run multiple instances of Kinovea at the same time on the same computer.
This can be used to record more than two cameras, play more than two videos at the same time, or create more advanced setups for capture and replay or instrumentation scenarios.
Note
By default each instance has its own set of preferences separate from the others.
This behavior is controlled by the option under Preferences ‣ General ‣ Instances have their own preferences.
This option can only be changed from the first instance of Kinovea.
When this checkbox is checked it is possible to run multiple instances of Kinovea at the same time. This can be used to record more than two cameras or play more than two videos at the same time.
Each instance is identified by a name or a number.
When this checkbox is checked each instance of Kinovea will maintain its own preferences, independently of all other instances.
This option is only available in the first instance started.
When this option is enabled and an image file is opened, Kinovea will try to detect an image sequence by looking for consecutive numbers in the name of other files of the same folder.
If an image sequence is detected, all the image files will automatically be loaded and the whole collection interpreted as a video.
Example of file names interpreted as a sequence:
image_001.png, image_002.png, image_003.png, etc.
test01.jpg, test02.jpg, test03.jpg, etc.
The numbers can be anywhere in the file name.
Aside from the numbers the rest of the name should be identical for all files.
The amount of leading zeroes in the numbering scheme should be consistent between all files of the sequence.
A gap or a change in the numbering system will be interpreted as the end of the sequence.
The detection is bidirectional: any file can be used to load the sequence, the resulting video will always start at the file with the lower number found.
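As a sketch of how such detection might work, the bidirectional scan can be expressed as follows. This is a hypothetical helper written for illustration, not Kinovea's actual implementation:

```python
import re

def detect_sequence(filenames, selected):
    """Find the files that differ from `selected` only by a consecutive
    number with the same zero-padding (illustrative sketch only)."""
    m = re.search(r"\d+", selected)
    if not m:
        return [selected]
    prefix, suffix = selected[:m.start()], selected[m.end():]
    width = len(m.group(0))
    nums = set()
    for f in filenames:
        if (len(f) == len(selected) and f.startswith(prefix)
                and f.endswith(suffix)):
            digits = f[m.start():len(f) - len(suffix)]
            if digits.isdigit():
                nums.add(int(digits))
    # Walk outward from the selected number; a gap ends the sequence.
    n = int(m.group(0))
    lo = n
    while lo - 1 in nums:
        lo -= 1
    hi = n
    while hi + 1 in nums:
        hi += 1
    return [f"{prefix}{i:0{width}d}{suffix}" for i in range(lo, hi + 1)]
```

Loading any file of the sequence yields the same result, starting at the lowest number found.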
When this option is checked and two videos are compared, changing the playback speed in one player will automatically change the playback speed in the other player to match it.
Note
The playback speed is matched according to the real time speed, taking into account differences in video file framerate and capture framerate.
If one video is in slow motion and the other is at normal speed, this option should still work as expected as long as the capture framerate of the slow motion video is correctly configured.
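The matching rule can be illustrated with a small sketch (hypothetical helper names, not Kinovea's code): the real-world speed is the slider value scaled by the ratio of file framerate to capture framerate.

```python
def real_time_percent(slider_percent, file_fps, capture_fps):
    """Real-world speed of the action, as a percentage, for a given
    slider value (percent of the file's nominal rate)."""
    return slider_percent * file_fps / capture_fps

def matching_slider(target_real_percent, file_fps, capture_fps):
    # Slider value a player needs so the action runs at the target
    # percentage of real-world speed.
    return target_real_percent * capture_fps / file_fps
```

For example, a video captured at 240 fps but saved at 24 fps plays at 10% of real-world speed at its nominal rate, so matching it against a normal-speed video played at 10% requires no slowdown at all.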
This option defines the default image aspect ratio applied whenever a video file is opened. It is the same as manually configuring the menu Image ‣ Image format.
The following options are available:
Auto detection
Force 4:3
Force 16:9
The Auto detection option uses the image size and the pixel aspect ratio found in the video file metadata to calculate the image height.
The other options will change the height of the video to match a 4:3 or 16:9 aspect ratio.
This option forces the deinterlacing mechanism to be enabled for all opened files. It is the same as manually configuring the menu Image ‣ Deinterlace.
This option lets you point to a .KVA file containing video annotations that will be automatically loaded when any video is opened.
Other annotation files can still be loaded on top of the video by using the sidecar file method or through the menu File ‣ Load annotations. They will be merged with each other.
The cache memory is used to load the video content in system memory and speed up navigation.
When the active video section (working zone) fits in the cache memory it will be automatically loaded into this cache. If the video section does not fit in the cache the memory will not be consumed.
When using side by side comparison each playback screen can use at most half the memory amount configured.
In the case of multiple instances of Kinovea, each instance has its own cache memory.
This option controls the format of all time-related information displayed in the program [1]. It is used in the timeline position and duration, in chronometers and clocks, and in exported files.
The following options are available:
Format                             Example          Description
[h:][mm:]ss.xx[x]                  1:10.48          Textual timecode.
Frame number                       1762             Rank of the current frame.
Total milliseconds                 70480            Integer number of milliseconds.
Total microseconds                 1284             Integer number of microseconds.
Ten thousandth of an hour          904              Ten thousandths of an hour.
Hundredth of a minute              542              Hundredths of a minute.
[h:][mm:]ss.xx[x] + Frame number   1:10.48 (1762)   Timecode followed by the frame number.
When using textual timecode if the real time framerate is higher than 100 fps, thousandths of seconds are displayed. Hours and minutes are only displayed when necessary.
Note
The time starts at the time origin. The time origin can be configured to be anywhere in the video.
Video locations that are before the time origin are displayed as negative numbers.
If the time origin is not manually defined, the time origin is automatically set to the start of the current video section.
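The textual timecode can be reproduced with a short helper. This is a sketch for illustration, not Kinovea's actual code; the `high_speed` flag stands in for the "real time framerate above 100 fps" condition:

```python
def timecode(total_ms, high_speed=False):
    """Format milliseconds as [h:][mm:]ss.xx[x]: hours and minutes only
    when necessary, thousandths when the framerate exceeds 100 fps."""
    sign = "-" if total_ms < 0 else ""   # times before the origin are negative
    ms = abs(total_ms)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s = rem / 1000
    frac = f"{s:06.3f}" if high_speed else f"{s:05.2f}"
    if h:
        return f"{sign}{h}:{m:02d}:{frac}"
    if m:
        return f"{sign}{m}:{frac}"
    return f"{sign}{frac}"
```

With this sketch, 70480 milliseconds renders as 1:10.48, matching the example in the table above.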
The unit for speed is used in the trajectory tool and in the Linear kinematics window when setting the measurement display option to Speed, Horizontal velocity or Vertical velocity.
It is also used in the Angular kinematics window when using Tangential velocity.
The following options are available:
Unit                  Symbol
Meters per second     m/s
Kilometers per hour   km/h
Feet per second       ft/s
Miles per hour        mph
Note
If no spatial calibration has been performed the speed unit will automatically be Pixels per second (px/s).
The unit for acceleration is used in the trajectory tool and in the Linear kinematics window when setting the measurement display option to Acceleration, Horizontal acceleration or Vertical acceleration.
It is also used in the Angular kinematics window when using Tangential acceleration, Centripetal acceleration or Resultant acceleration.
The following options are available:
Unit                        Symbol
Meters per second squared   m/s²
Feet per second squared     ft/s²
Note
If no spatial calibration has been performed the acceleration unit will automatically be Pixels per second squared (px/s²).
The unit for angle is used in tools measuring angles and in the Angular kinematics window when setting the data source option to Angle or Total displacement.
This option defines the name and symbol for an additional length unit.
The built-in length units are: millimeters, centimeters, meters, inches, feet and yards.
This custom length unit will appear at the bottom of the length unit drop down in the spatial calibration dialogs.
The scale factor between pixels and this unit is defined during the calibration process in the same manner as for other length units.
Using the custom length unit to add micrometers to the list of built-in length units.
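The scale factor works the same way for the custom unit as for the built-in ones. A minimal sketch, assuming a simple linear calibration (hypothetical function name):

```python
def to_custom_unit(length_px, reference_px, reference_length):
    """A segment of known real-world length defines the scale factor;
    every later measurement is converted with the same factor.
    Illustrative sketch, not Kinovea's code."""
    scale = reference_length / reference_px  # custom units per pixel
    return length_px * scale
```

For example, if a 200-pixel segment is calibrated as 50 micrometers, an 80-pixel measurement converts to 20 micrometers.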
[1] With the exception of the time axis in the kinematic analysis dialogs. In these dialogs the time is always displayed numerically, either in milliseconds or normalized.
This option controls the filtering of spatial coordinates for trajectories and tracked drawings.
Due to the digitization process the raw coordinates are noisy and the resulting quantities, especially derivatives like speed and acceleration, are less accurate than they could be.
Carefully filtering the coordinates removes a lot of this noise and provides more accurate measurements.
You can uncheck this option if you would rather export the raw coordinates and perform the filtering yourself.
Tip
For more in-depth information on the exact filtering approach and algorithms used, refer to the About page of the Linear kinematics window.
When this checkbox is ticked custom tools will display extra information about the name and relations of points, handles and segments.
This option can be used by tool authors to facilitate design.
The options on this tab control the default visibility of drawings. Drawings can be customized independently by opening their context menu and selecting options in the Visibility sub-menus.
In the most general case a drawing’s visibility over time follows the following pattern:
This option controls the opacity used during the opaque section.
A value of 100% means the drawing will not let the background show through.
A value less than 100% means the drawing will be somewhat transparent.
This option controls how long the drawing stays at its maximum opacity level before fading out. This section starts at the keyframe onto which the drawing was added.
The options on this page control the default parameters for trajectories and drawing tracking.
Trajectories can also be configured independently by opening their context menu and going to their configuration window.
The tracking parameters of other drawings cannot be modified after creation.
The object window size defines the size of the patch of image around the tracked point that is being looked for in other frames.
This should be set to be as small as possible in order to avoid including the background in the tracked patch.
The search window size defines the area in which the point is looked for.
This should be large enough to compensate for the object's change of position from one frame to the next.
However, a search window that is too large can lead the tracking algorithm to pick a different object in another part of the image, if that object looks similar to the tracked one.
Tip
Use the trajectory tool configuration dialog to visually figure out the appropriate size of the object and search windows.
When this option is checked the video is recorded without compressing the video frames to JPEG first.
When this option is unchecked the video frames are compressed to JPEG using high quality settings to retain fidelity.
This option can be used to minimize dropped frames by trading off space for speed; due to the high quality JPEG settings used, image quality is not significantly impacted either way.
When using a fast storage medium it can be faster to store the large uncompressed images compared to the time it takes to compress them.
For image snapshots the compression depends on the file format.
Note
When configuring cameras to use MJPEG streams this option is ignored. The MJPEG streams are always saved as-is in their container format.
Note
When configuring cameras to use raw Bayer pattern images you must enable this option to be able to reconstruct the color information at playback time.
Warning
Not all video players can play back uncompressed files.
This option defines the frequency at which the camera images are updated in the capture screen.
While recording the computer resources are shared between displaying the camera stream and recording it to the storage medium.
The highest priority is always given to recording but lowering this value can help reduce the overall load on the computer and improve recording performance.
This option lets you point to a .KVA file containing annotations that will be automatically loaded when any camera stream is opened.
Other annotation files can still be loaded on top of the camera stream by using the menu File ‣ Load annotations. They will be merged with each other.
This option controls the amount of memory used to remember old frames, in the context of the delayed view of the camera feed.
By extension, this option defines the maximum delay configurable in the capture screen. The maximum delay is based on image size, image format and capture framerate.
When using two capture screens at the same time each screen uses half the memory amount configured.
In the case of multiple instances of Kinovea, each instance has its own delay buffer memory.
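The relationship between buffer size and maximum delay can be approximated as follows. This is a rough sketch assuming uncompressed frames; Kinovea's internal accounting may differ:

```python
def max_delay_seconds(buffer_mb, width, height, bytes_per_pixel, fps):
    """Upper bound on the configurable delay: how many seconds of
    frames fit in the delay buffer (illustrative approximation)."""
    frame_bytes = width * height * bytes_per_pixel
    frames = (buffer_mb * 1024 * 1024) // frame_bytes
    return frames / fps
```

For example, a 768 MB buffer holds about 4.3 seconds of 1920×1080 RGB video at 30 fps; halving the image area or the framerate roughly doubles the available delay.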
Warning
Unlike the cache memory in the playback screen, this amount of memory is always allocated and used as soon as a capture screen is opened.
The recording mode option defines how the recording sub-system interacts with the delay buffer to create the output video.
When the camera transmits a new frame it is always put in the delay buffer and the recording sub-system always takes frames from the delay buffer to create the output video.
When using this recording mode the recording is not performed on the fly.
Instead, at the end of the recording process, when clicking the stop recording button or when the maximum recording duration is reached, the camera feed is paused, the delay buffer is frozen, and the video file is created all at once.
The delay value is taken into account for creating the recording.
This mode offers the best recording performance and minimizes dropped frames, at the cost of a reduced maximum length for created videos and a temporary freezing of the camera feed.
Tip
The maximum length of recorded videos using this recording mode depends on the size of the delay buffer. This can be configured from the Memory preference page.
The options in this group let you alter the framerate written in the metadata of the output file.
This influences the amount of resources required to replay the file and the apparent speed of the action.
A camera might be capable of producing and transmitting 1000 frames per second but the computer will not be able to play the file back at that speed and the monitor won’t be able to refresh itself fast enough either.
To work around this problem it is usual to reduce the framerate of the output file to a more typical one. Recording devices normally apply this transformation automatically. This results in a video that appears to be in slow motion.
This option controls the framerate from which the output file is modified to use a lower one.
If the camera is configured to send images at a higher framerate than this value, the actual framerate stored in the file metadata will be the replacement framerate.
If the camera is configured to send images at a lower framerate than this value, no change will happen.
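The rule above can be summarized in a few lines (hypothetical helper names, for illustration only):

```python
def stored_framerate(camera_fps, threshold_fps, replacement_fps):
    # If the camera framerate exceeds the threshold, the replacement
    # framerate is written to the file metadata; otherwise the camera
    # framerate is kept unchanged.
    return replacement_fps if camera_fps > threshold_fps else camera_fps
```

A 1000 fps camera recorded with a threshold of 100 fps and a replacement of 25 fps thus produces a file that plays back at 25 fps, appearing as slow motion.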
The options on this page let you configure the automated naming system for image snapshots of the camera stream.
The final path and file name is created by concatenating the Root, Sub directory and File values.
Each field can contain special macros referring to context variables that are automatically inserted in the final path.
If no context variables are used at all, the file naming system will prepare the next recording by automatically incrementing a counter and appending a number to the file name.
If the computed value results in the same name as an existing file the capture screen will prompt for overwrite confirmation.
To view the list of available context variables click the % button next to the Sub directory or File fields.
The following context variables are available:
Macro       Description
%year       The current year.
%month      The current month as a number from 01 to 12.
%day        The current day of the month from 01 to 31.
%hour       The current hour from 00 to 23.
%minute     The current minute from 00 to 59.
%second     The current second from 00 to 59.
%date       The current date in the format “YYYYMMDD”.
%time       The current time in the format “HHMMSS”.
%datetime   The current date and time as “YYYYMMDD-HHMMSS”.
%camalias   The camera alias.
%camfps     The configured framerate for the camera.
%recvfps    The framerate really received from the camera.
%%          This is replaced by an empty string.
Anything that is not exactly part of a macro is copied verbatim to the output.
Some examples assuming the current date and time is October 20th, 1968 at 16:00:00 (4 PM):
If you want to use a completely static file name and bypass the automated counter increment for consecutive recordings, use the %% macro variable.
Be aware that this will require you to either enter the filename manually for every recording or overwrite an existing file.
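The substitution can be sketched as follows. This is a hypothetical helper, not Kinovea's code; in particular the exact formatting of %camfps and %recvfps is an assumption:

```python
from datetime import datetime

# Longest names first so that %datetime is matched before %date and %time.
MACROS = ["datetime", "camalias", "camfps", "recvfps", "minute",
          "second", "month", "year", "hour", "date", "time", "day"]

def expand(pattern, now, camalias="", camfps=0, recvfps=0):
    """Expand naming macros in `pattern` using the current date/time
    and camera context (illustrative sketch only)."""
    values = {
        "year": f"{now.year:04d}", "month": f"{now.month:02d}",
        "day": f"{now.day:02d}", "hour": f"{now.hour:02d}",
        "minute": f"{now.minute:02d}", "second": f"{now.second:02d}",
        "date": now.strftime("%Y%m%d"), "time": now.strftime("%H%M%S"),
        "datetime": now.strftime("%Y%m%d-%H%M%S"),
        "camalias": camalias, "camfps": str(camfps), "recvfps": str(recvfps),
    }
    out = pattern.replace("%%", "")  # %% expands to an empty string
    for name in MACROS:
        out = out.replace("%" + name, values[name])
    return out
```

Anything outside a macro is copied verbatim, so a pattern like "%camalias_%datetime" yields names such as "cam1_19681020-160000".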
When this option is checked Kinovea measures the volume level on the microphone and triggers the start of the recording when this volume goes above the configured threshold.
Note
The audio trigger mechanism can be disarmed for individual cameras from the capture screen controls.
This option lets you select which microphone is used to trigger recordings.
Tip
Ensure that Kinovea can access your microphone by opening Windows Sound settings, going to Microphone privacy settings and turning on Allow apps to access your microphone.
The trigger threshold defines the volume level required to trigger recordings.
You should see the black line moving laterally as the microphone picks up sounds. The vertical red line represents the trigger level.
The counter on the right is incremented each time the trigger is reached and reset when the threshold value is changed.
You can use this to get immediate feedback while figuring out the appropriate configuration.
This option defines the maximum duration for recordings.
Recordings started manually or by audio trigger will be stopped right after they reach this duration.
Setting the value to 0 disables the option and requires manually stopping the recording process.
This option is orthogonal to delayed recording.
For example if the camera is configured with a 2-second delay and the maximum duration is set to 5 seconds, the created video will last 5 seconds as configured:
the first 2 seconds are actions that happened before the recording trigger and the last 3 seconds are actions that happened after the recording trigger.
In combination with the audio trigger this option lets you record multiple sequences without manually interacting with the computer.
Note
This value is a lower bound; the final video might be slightly longer than configured due to internal processing and alignment with frame boundaries.
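The interaction between delay and maximum duration described above can be expressed as a simple split (hypothetical helper, for illustration):

```python
def recording_bounds(delay_s, max_duration_s):
    # With delayed recording, part of a fixed-length video shows action
    # from before the trigger and the rest from after it.
    before = min(delay_s, max_duration_s)
    return before, max_duration_s - before
```

With a 2-second delay and a 5-second maximum duration, the recording covers 2 seconds before the trigger and 3 seconds after it, as in the example above.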
This option lets you set up a program that will be run at the end of every recording. This can be used to automatically copy the file to a different location, perform compression or apply post-processing.
The command line can contain special macros referring to context variables that are automatically inserted in the final command.
This option bypasses the overwrite confirmation dialog when the recording about to start points to an existing file. If the option is checked the existing file is irremediably deleted and overwritten by the new one.
This option can be used if you are limited in space and do not need to save all sequences.
In this scenario you can continuously record to a single file and manually copy it to a different location only when you really want to keep it.
This page lets you view and change the keyboard shortcuts for Kinovea internal commands.
The shortcuts are grouped in categories based on which part of the user interface the shortcut is active on.
The Commands list view displays each command and the corresponding keyboard shortcut.
To modify a keyboard shortcut select the corresponding category and command in the command list.
The existing keyboard shortcut will be displayed in the text box at the bottom left of the page.
Click in the text box and perform the sequence of keys that you wish to use as a replacement shortcut. Click the Apply button to commit the new shortcut to the preferences.
Click the Clear button to remove the existing keyboard shortcut for the active command.
Click the Default button to restore this particular command to its default keyboard shortcut.
To open a video in Kinovea do any of the following:
Drag and drop the file from Windows explorer onto the Kinovea window.
Use the menu File ‣ Open and select the file.
Use the menu File ‣ Recent and select a recently opened file.
Use the explorer panel to locate and double click on a file in the file explorer, the shortcut explorer, or the recently captured files, or drag and drop it into an opened playback screen.
Double click on a thumbnail in the thumbnail explorer.
Kinovea comes equipped with video decoding libraries to support many file types and does not need or use any third-party codecs.
It supports the following:
Most video file formats, including MP4, MPEG, MOV, AVI, MKV, M4V, FLV, WMV, among others.
Simple image files such as PNG, JPEG and BMP.
Animated GIFs.
SVG vector images.
Image sequences.
Image sequences are collections of individual image files that have the same size and are named following a numbering pattern. They are loaded as a video.
The drawings toolbar contains buttons to create new key images, select the active tool and open the color profile.
The toolbar contains more tools than those immediately visible.
Buttons that host extra tools have a little black triangle in the top-left corner.
The extra tools can be accessed by right-clicking or long-clicking the primary button.
The working zone defines the segment of the video that the player is working with.
The play head loops within the working zone.
Marks the current time as the time origin. This makes time values relative to this moment.
Locks the working zone start and end point to avoid changing them by mistake.
Sets the starting point of the working zone within the video.
Sets the ending point of the working zone within the video.
Resets the working zone to the whole video.
You can also update the working zone boundaries by directly manipulating the blue end points.
Tip
If the amount of data fits in the cache memory, the working zone will be loaded in memory.
This improves playback performances and enables the Video ‣ Overview and Video ‣ Reverse menus.
The cache memory can be configured under Options ‣ Preferences ‣ Playback ‣ Memory.
The speed slider goes from 0 to twice the nominal speed of the video.
The displayed speed value takes into account the slow motion factor configured such that the speed is shown as a percentage of the real world action speed.
For example if a video is filmed at 240 fps and saved into a file as 24 fps, the video will normally play back at 10% of the real world speed.
In this case the speed control will go from 0 to 20% with a mid-point at 10%.
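The range of the speed control can be computed from the two framerates (a sketch with hypothetical names, matching the example above):

```python
def speed_slider_range(file_fps, capture_fps):
    """Range of the speed control expressed as a percentage of
    real-world action speed (illustrative helper)."""
    nominal = file_fps / capture_fps * 100  # real-world % at normal playback
    return 0.0, nominal, 2 * nominal        # (minimum, mid-point, maximum)
```

A 240 fps capture saved at 24 fps gives a slider running from 0 to 20% with a mid-point at 10%, while a normal-speed video runs from 0 to 200% with a mid-point at 100%.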
Warning
If the video cannot be played back at its nominal speed for performance reasons the playback speed value will automatically be lowered.
Playback performance depends on the displayed image size, the frame rate and the file format.
To change the aspect ratio of the image use the menus under Image ‣ Image format.
Some devices use non-rectangular pixels and don’t fill the corresponding pixel aspect ratio value in the file metadata.
In these cases it might be necessary to force the aspect ratio to a known value.
To deinterlace the video use the menu Image ‣ Deinterlace.
Some capture devices store video using an interlaced format.
Interlaced videos store half images at twice the frame rate, alternating odd and even rows.
This causes a combing artifact when the filmed motion is fast as objects or subjects move during the half frame interval.
The deinterlacing algorithm reconstructs full images by combining rows from adjacent frames.
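A naive "weave" reconstruction illustrates the idea: rows from two successive half-images are interleaved into one full frame. This sketch is illustrative only; the actual algorithm used by Kinovea may be more elaborate.

```python
def weave_deinterlace(field_a, field_b):
    """Rebuild a full frame by interleaving the rows of two successive
    fields, each represented as a list of rows (sketch only)."""
    frame = []
    for row_a, row_b in zip(field_a, field_b):
        frame.append(row_a)  # even row from the first field
        frame.append(row_b)  # odd row from the second field
    return frame
```

Simple weaving still shows combing when the subject moves between fields, which is why practical deinterlacers also blend or interpolate.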
When a video is captured with a high speed camera (or using the high speed or slow motion function of a smartphone),
the generated video file often has a frame rate different from the capture frame rate.
For example a video filmed at 1000 fps may be saved to a file with a more typical playback frame rate of 25 fps.
In this case the video will play back slower than real time, which is expected,
but the time-related information and calculations would be erroneous if they were based on the playback framerate.
To work with this type of video it is important to configure (or “calibrate”) the time scale.
This is done by going to Video ‣ Configure video timing and filling the capture frame rate.
Changing this option does not change the nominal speed at which the video is played back.
In other words setting the speed control at its mid-point will still play back the video at the same slow motion rate as before.
Instead, this option changes the time coordinates of the images.
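The effect on time coordinates can be shown with a one-line sketch (hypothetical helper name):

```python
def frame_time_ms(frame_index, capture_fps):
    # Once the capture framerate is calibrated, each frame covers
    # 1 / capture_fps seconds of real time, regardless of playback rate.
    return frame_index / capture_fps * 1000
```

For a video captured at 1000 fps, frame 500 sits at 0.5 seconds of real time even though a 25 fps file would place it at 20 seconds of playback time.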
In some cases a video is saved with a frame rate which is just plain wrong, or Kinovea cannot read it.
For example a USB camera might claim that it is capturing video at 25 fps but the video stream is actually transferred at 15 fps.
In this case the video will play back at the wrong speed and the time calibration will be wrong.
If you know the real playback frame rate at which the video is supposed to be played back, you may enter it in Video ‣ Configure video timing.
Changing this option does change the nominal speed at which the video is played back.
Two videos can be synchronized by setting their time origin to a common event visible on both videos.
When the videos are synchronized they will pass through their time origin at the same time.
To set the time origin in a video move to that point in the video
and click the Mark current time as time origin button or right click the background of the video and choose the Mark current time as time origin menu.
Alternatively you can move each video to the correct point independently and use the Synchronize videos on the current frames button in the joint controls area.
During joint-playback, the synchronization mechanism means one video may start and/or end playing before the other.
To perform joint frame-by-frame navigation, move the cursor in the joint timeline or use the joint controls buttons.
By default the speed controls in each playback screen are linked with each other:
lowering the playback speed in one video lowers it in the other.
The speed controls are independent of the frame rate of the video files so
this should apply a similar slow motion factor to both videos and keep the comparison meaningful.
Tip
If one of the videos was captured with a high speed camera and has a different capture frame rate,
this frame rate should be configured for this video via menu Video ‣ Configure video timing.
Once this configuration is done both controls will still be coherent with each other.
If you are confident that you do not want the speed sliders to be linked together you may change the option in Options ‣ Preferences ‣ Playback ‣ General ‣ Link speed sliders when comparing videos.
Superposition paints each video on top of the other at 50% opacity.
This is a basic mechanism to compare motion in-situ if the videos were filmed in the same environment with a static camera.
The overview mode is a special video mode used to display multiple images of the video at the same time.
This type of display is also called a “kinogram”.
It can be used to create a single-image summary of the whole video sequence.
To enter the overview mode, use the menu Video ‣ Overview.
Tip
The overview mode is only enabled when the entire working zone is cached in memory.
To change the amount of memory used for the cache go to Options ‣ Preferences ‣ Playback ‣ Memory.
When the Overview mode is active the main image viewport displays a collection of frames taken from the video at regular intervals.
The playback controls are disabled.
To control the number of images displayed use the mouse wheel. The display can be set between 2x2 and 10x10 images.
To quit the Overview mode, use the close button in the top-right corner of the playback screen.
The reverse mode is a video mode where the video frames are re-ordered backwards.
To enter the reverse mode, use the menu Video ‣ Reverse.
Tip
The reverse mode is only enabled when the entire working zone is cached in memory.
To change the amount of memory used for the cache go to Options ‣ Preferences ‣ Playback ‣ Memory.
To quit the Reverse mode, use the close button in the top-right corner of the playback screen.
Annotation tools are used to add drawings and text to images of the video.
Some tools can also be used to measure distances or display coordinates.
Drawings are attached to a specific key image.
Deleting the key image deletes all the drawings attached to it.
Drawings are vector-based: they can be modified after they have been added to the video.
Drawings have a context menu that can be used to access style options, visibility configuration, tool-specific functions, tracking management, copy and paste, support and deletion.
While a tool is active, right clicking the viewport opens the color profile at the page of the active tool.
There are more tools than those immediately visible.
Buttons with a small arrow in the top-left corner contain other tools that can be accessed by doing a right click or a long click (click and hold) on the button.
A fly-out menu opens with the extra tools available.
The hand tool is used to manipulate drawings or pan the whole image.
To stop using a particular tool and come back to the hand tool use the Escape key or click the hand tool button.
Tip
You can also use the middle mouse button to directly manipulate drawings without changing back to the hand tool.
Key image
Adds a new key image.
Commentary
Opens the commentary dialog to attach a paragraph of text to the key image using the rich-text editor.
Color profile
Opens the color profile dialog to change the default style of drawings.
The magnifier function creates a picture-in-picture effect with an enlarged version of the current image displayed within the original image.
This is a display mode rather than a normal drawing tool; it is not saved in the KVA file.
Drawings have styling properties like color, size of text or type of line. The exact set of properties depends on the tool.
New drawings use the default style configured for this type of drawing.
The default style can be changed in two ways:
Open the color profile by clicking the color profile button or right clicking the viewport while the corresponding tool is active.
Right click an existing drawing and use the menu Set style as default.
To change the style of a drawing after it is created right click it and select the Configuration menu.
This dialog also lets you change the name identifying the drawing.
Certain tools that are presented as separate entries in the tool bar are actually style variants of each other.
For example the presence of arrows at the end of lines is merely a style option.
This means it is possible to convert a line into an arrow by changing the arrow style option.
Drawings have visibility properties that control their opacity throughout the video.
New drawings start with the default opacity options set in Options ‣ Preferences ‣ Drawings ‣ Opacity.
Each drawing can then be configured to use different visibility options.
In general terms drawings have a fade-in ramp, an opaque section and a fade-out ramp.
The default options make drawings fully visible on their key image and fade in and out of the neighboring frames.
When drawings are tracked they stay opaque during the section of video where they are tracked.
With this option the drawing uses a custom configuration defined through the Configure custom fading dialog.
Maximum opacity (%)
This option controls the opacity used during the opaque section.
A value of 100 % means the drawing will not let the background show through.
A value less than 100 % means the drawing will be somewhat transparent.
Opaque duration (frames)
This option controls how long the drawing stays at its maximum opacity level before fading out. This section starts at the key image on which the drawing was added.
Fading duration (frames)
This option controls the duration of the ramps before and after the maximum opacity until the drawing becomes completely invisible.
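The opacity ramp described by these three options can be sketched as a small function. The function name, parameter names, and default values here are illustrative, not Kinovea's internals:

```python
def opacity_at(frame, keyframe, max_opacity=100, opaque_frames=10, fade_frames=20):
    """Illustrative opacity ramp: fade in before the key image, stay opaque
    for `opaque_frames` frames starting at the key image, then fade out."""
    if frame < keyframe:                          # fade-in ramp
        d = keyframe - frame
        return max(0, max_opacity * (1 - d / fade_frames))
    if frame < keyframe + opaque_frames:          # opaque section
        return max_opacity
    d = frame - (keyframe + opaque_frames - 1)    # fade-out ramp
    return max(0, max_opacity * (1 - d / fade_frames))
```

For example, with a 20-frame fading duration, a frame 10 frames before the key image would be drawn at 50 % opacity.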
Time information in Kinovea is relative to a specific point in the video.
By default, this is the start of the video or working zone.
To set the time origin to an arbitrary point in the video navigate to this point and use the Mark current time as time origin button or context menu.
All displayed times will be relative to this origin, using negative time before the event and positive time after it.
Setting the time origin to another point of the video can be useful to quickly get timing information in relation to a specific event,
for example, a golf ball impact, a pitcher’s release point, a long jumper’s take-off, a starting gun trigger.
In addition to opening image files as if they were videos, you may import images and vector drawings into videos to create a picture in picture effect.
To import an image file in the current video use the menu Tools ‣ Observational references ‣ Import image….
To import an image that you have copied in the clipboard in another application,
right click the video background and use the menu Paste image from clipboard.
To transfer an image from one screen to the other, use the menu Copy image to clipboard and then paste it in the other video.
Annotations are saved in KVA files. These are XML files with a .kva extension.
KVA files store data for key images, comments, drawings, trajectories, chronometers, time origin, tracked values, and the coordinate system calibration.
To save the current annotations use the menu File ‣ Save or the shortcut CTRL + S.
Tip
Save the annotation file with the same name as the video to have it automatically loaded the next time you open the video in Kinovea.
This is known as a “sidecar” file, for example video.mp4 and video.kva. This is the default option when saving.
To load an annotation file into an opened video use the menu File ‣ Load annotations.
The imported annotations are merged together with the existing annotations you might have already added to the video.
If the imported annotations were created on a different video, a conversion step may take place to adapt the drawings' dimensions and time codes to the new image size and frame rate.
Loading external annotations can be used to import calibration settings between videos filmed during the same session without having to perform the calibration for every video.
To automatically import a specific annotation file into every video,
use Options ‣ Preferences ‣ Playback ‣ General ‣ Default annotation file…, and point it to the KVA file you want to be loaded.
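The conversion step mentioned above amounts to rescaling coordinates by the image-size ratio and remapping frame indices by the frame-rate ratio. A minimal sketch, with hypothetical helper names (not Kinovea's actual code):

```python
def convert_point(x, y, src_size, dst_size):
    """Scale a drawing point from the source image size to the new one."""
    return x * dst_size[0] / src_size[0], y * dst_size[1] / src_size[1]

def convert_frame(frame, src_fps, dst_fps):
    """Map a frame index so it lands at the same real-world instant."""
    return round(frame * dst_fps / src_fps)
```

For example, a point at (100, 50) in a 640 x 480 video maps to (200, 100) in a 1280 x 960 one.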
The result of running the OpenPose program on a video is a set of JSON files containing data for one or more human postures.
OpenPose uses a 25-point body model.
This is not meant to be used for measurements but for general posture assessment.
The workflow to import OpenPose data into Kinovea is the following:
Run the OpenPose software on the video, using the write_json option.
This creates a set of .json files in the output directory.
Each file contains descriptors for the detected poses.
In order to use Kinovea to make measurements on the video, it is necessary to calibrate the transformation of pixels in the image into real world units.
Kinovea supports two calibration mechanisms: calibration by line and calibration by plane.
All measurements in Kinovea must sit on a 2D plane.
If the motion you want to study is on a plane parallel to the image plane (orthogonal to the camera optical axis), you may use calibration by line.
Otherwise, if you are measuring points on the ground, for example, you should use the calibration by plane.
If the motion is happening in arbitrary 3D space you cannot measure it in Kinovea.
Line calibration is possible when the motion is sitting on a 2D plane parallel to the camera plane.
To perform line calibration follow these steps:
Have an object of known length visible in the video.
Add a line object and place it on top of the object of known length.
Right click the line and select the Calibrate menu.
Enter the real-world length of the object.
Note
Modifying the calibration line after calibration already took place will update the calibration to use the new pixel length of the line
and keep the real-world length as configured.
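Conceptually, line calibration reduces to a single scale factor relating pixels to real-world units. A sketch under that assumption (function names are ours, not Kinovea's):

```python
def line_calibration_scale(pixel_length, real_length):
    """Real-world units per pixel, from the calibration line."""
    return real_length / pixel_length

def to_real(dx_px, dy_px, scale):
    """Convert a pixel displacement to real-world units. Only valid for
    motion in the plane parallel to the image plane."""
    return dx_px * scale, dy_px * scale

# A 1 m object spanning 200 px gives 0.005 m/px; a 100 px move is 0.5 m.
```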
Plane calibration is possible when the motion is sitting on an arbitrary 2D plane visible in the video.
To perform plane calibration follow these steps:
Have a rectangle of known dimensions visible in the video.
Add a perspective grid object and move its corners to match the rectangle.
Right click a corner and select the Calibrate menu.
Enter the real-world width and height of the rectangle.
Note
Modifying the calibration plane after calibration already took place will update the calibration to use the new pixel dimensions of the plane
while keeping the real-world dimension as configured.
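Conceptually, plane calibration estimates a homography mapping the four pixel corners of the grid to the four corners of the real-world rectangle. The sketch below shows the principle (direct linear transform on 4 point pairs); it is not Kinovea's actual implementation:

```python
import numpy as np

def homography(px_corners, world_corners):
    """3x3 homography mapping pixel corners to world-plane corners."""
    rows = []
    for (x, y), (X, Y) in zip(px_corners, world_corners):
        rows.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        rows.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    # The homography is the null vector of this system (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_world(H, x, y):
    """Apply the homography to one pixel coordinate."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

For example, mapping a 100 px square onto a 2 m x 1 m rectangle sends the pixel (50, 50) to the world point (1.0, 0.5).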
Mount the camera on a tripod and avoid camera motion
The camera must remain stationary for the images to provide a stable frame of reference.
If the camera is moving relative to the scene, the plane of motion will change over time
and the calibration from one video frame cannot be used on other frames.
Tip
If you do not control the camera and it is moving, you can try to track the calibration object itself.
To minimize perspective distortion, place the camera as far as possible from the scene and zoom in.
This will reduce errors due to points moving in and out of the plane of motion.
To align the axes of the coordinate system with the real world use a plumb line or other object of known direction as the calibration object.
If the real-world vertical is not parallel to the sides of the image, you can set the calibration line to define the vertical axis.
Avoid measuring objects outside the plane of motion
Everything that is measured must sit on the calibrated plane of motion.
Coordinates and measurements using points physically outside the plane of motion will be inaccurate.
The first placed point of the line becomes the default origin of the coordinate system.
The line direction becomes an axis of the coordinate system based on the option selected in Coordinate system alignment.
The following alignment options are available:
The line defines the horizontal axis
The line defines the vertical axis
Aligned to image axes
If Aligned to image axes is selected, the orientation of the line is ignored.
The bottom-left corner of the grid becomes the origin of the coordinate system.
The coordinate system axes are aligned with the calibration grid object.
When using a camera with significant lens distortion, it is important to calibrate the distortion in Kinovea before making measurements.
The lens distortion calibration rectifies the coordinates before they are passed to the spatial calibration.
Barrel distortion exhibited by a GoPro action camera, and the default coordinate system after lens distortion calibration.
Lens calibration is compatible with both line calibration and plane calibration.
The coordinate system, lines and grid objects are drawn distorted to follow the distortion.
The images themselves are not rectified.
The description of the lens distortion is done in the lens calibration dialog in the menu Tools ‣ Camera calibration.
There are multiple approaches to finding coefficients that best match a particular camera lens and camera mode, they are described below.
All the approaches involve filming a highly structured pattern in order to facilitate the distortion estimation, whether it is done visually or programmatically.
This pattern can be a chess board, a brick wall, or any structured image displayed on a flat screen.
The following parameters are used during calibration:
In this approach the distortion is estimated visually by directly changing the distortion coefficients and trying to align the grid on the image.
It is the least accurate approach but can still provide decent results.
Follow these steps:
Film the pattern straight on.
Load the video and open the lens calibration dialog.
Tweak k1 and k2 distortion coefficients to match the distorted grid image.
k1 is the distortion of the first order and should be used as a starting point to get the grid to align as best as possible in the central region of the image.
k2 can be used to counteract the main distortion at the periphery of the image.
You can play with the cx, cy, p1, p2 parameters as well to adjust the grid to the image.
In this approach the distortion is estimated programmatically by Kinovea from a set of distortion grid objects manually placed over images of the video.
Follow these steps:
Film a chess board or structured pattern from various angles.
Select several images (for example five) and add a Distortion grid object to each.
Map each point of each grid onto corners of the filmed pattern.
Open the lens calibration dialog and click the Calibrate camera button in the lower left.
This will compute and fill the distortion parameters.
The distortion parameters are saved in the KVA file but if you want to re-use the same parameters on a different video you can export them to a separate file.
Use the menus File ‣ Save and File ‣ Open in the lens calibration dialog.
Note
Any change of camera model, lens, or configuration options involving image resolution or zoom requires a new calibration procedure.
The time unit can be changed from the menu Options ‣ Time or from Options ‣ Preferences ‣ Playback ‣ Units ‣ Time.
The following options are available:
Format
Example
Description
[h:][mm:]ss.xx[x]
1:10.48
Textual timecode.
Frame number
1762
Rank of the current frame.
Total milliseconds
70480 ms
Integer number of milliseconds.
Total microseconds
1284 µs
Integer number of microseconds.
Ten thousandth of an hour
904
Integer number of ten-thousandths of an hour.
Hundredth of a minute
542
Integer number of hundredths of a minute.
[h:][mm:]ss.xx[x] + Frame number
1:10.48 (1762)
Textual timecode followed by the frame number.
Note
The configured time unit is used for the time position and duration, the clock and stopwatch tools, and when exporting to spreadsheets.
The kinematics dialog always uses total milliseconds as the time unit.
Times displayed are relative to the time origin defined for the video.
By default the time origin is at the beginning of the video but you can change it
in order to display times relative to a particular event of the video, such as a ball impact,
a jump take-off, a release point, a race start, etc.
Video positions before the time origin have negative times.
The time origin can be changed manually by using the time origin button or by right clicking the image and selecting Mark current time as time origin.
The current video frame becomes the time origin.
To annotate the video with the current video time use the clock tool.
By default a clock object simply displays the current video time using the global time origin.
Each clock tool may also have its own custom time origin independent from the global one.
To define a custom time origin, right click the clock tool and choose Mark current time as time origin for this clock.
The clock object can be identified by name by using the Show label menu and changing the object name in its configuration dialog.
Videos filmed with high speed cameras or in high speed mode can have a video frame rate different from the capture frame rate.
For example a video filmed at 1000 fps is typically saved with a “standard” playback rate of 30 fps.
This makes the final video appear in slow motion even when the speed slider is set to 1x.
In this case it is important to tell Kinovea about the original capture frame rate for times to be correct.
This impacts time positions and time intervals.
Open the video timing dialog from menu Video ‣ Configure video timing…
and in the top part of the dialog, enter the capture frame rate.
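The correction can be sketched as two simple conversions; the function names are ours, for illustration only:

```python
def real_time_s(frame_index, capture_fps):
    """Real-world time of a frame, using the capture frame rate."""
    return frame_index / capture_fps

def playback_time_s(frame_index, playback_fps):
    """Time shown by a naive player, using the playback frame rate."""
    return frame_index / playback_fps

# Filmed at 1000 fps, saved at 30 fps: frame 500 is 0.5 s of real action
# even though it plays back around 16.7 s into the file.
```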
Note
When capturing videos with Kinovea this option is automatically set based on information found in the KVA file saved next to the recording.
To measure an angle, add an angle object and position its end points.
By default the angle runs counter-clockwise, starting from the dashed leg.
The context menu options let you switch between signed or unsigned angle, change from counter-clockwise to clockwise, and switch to supplementary angle.
Note
Always keep in mind that it is not possible to measure angles in arbitrary space from a 2D image.
Angles can only be measured when the three points lie on a known 2D plane.
This must be either the image plane or the plane calibrated using plane calibration.
The goniometer tool lets you measure the extension or flexion of a body segment relative to a referenced anatomical angle or neutral position.
A physical goniometer combines two arms and a protractor.
One arm is called the stationary arm and the other is called the movable arm.
The protractor part contains multiple graduated rings that allow the physician to pick a reference axis when reading the angle.
The stationary arm is aligned with the reference segment to mark the neutral position, and the movable arm is aligned with the segment for which we are measuring the range of motion.
Using a goniometer instead of a simple protractor makes it easier to align the protractor with the body segments
and helps standardize the way the measurements are made for the range of motion of specific joints.
In Kinovea, the goniometer tool has three branches:
The stationary arm is the thick plain arm.
The movable arm is the one with the arrow at the end.
The dashed line is used to define the protractor reference axis in relation to the stationary arm.
This branch rotates in 45° increments relative to the stationary arm.
Plantar flexion. The reference axis is set perpendicular to the leg.
This tool is conceptually similar to a real goniometer with 8 protractor rings but the reading is simplified by showing only one measurement at a time.
The angle-to-horizontal and angle-to-vertical tools let you measure angles relative to the horizontal or vertical axis of the image.
The dashed line represents the reference axis.
To track the trajectory of a single point or body joint visible on the image follow these steps:
Right click the object to track and choose Track path.
Move the video forward using the Next frame button, the mouse wheel or the Play button.
Adjust the point position as necessary during the path creation.
To finish tracking, right click inside the tracking window and choose End path edition.
Tracking is a semi-automatic process. A candidate point location is computed automatically but can be adjusted manually at any time.
While tracking is in progress two rectangles will be visible around the object being tracked.
The inner rectangle is the object window, the outer rectangle is the search window.
When the automated tracking fails, correct the point location by dragging the search window until the cross at the center of the tracking tool is at the correct location.
When tracking resumes, it will use this new point as reference.
Tip
When the trajectory object is not in path edition mode the trajectory is interactive: clicking on any point of the trajectory will move the video to the corresponding frame.
This option enables the display of a measurement label following the current point on the trajectory.
The options are kinematics quantities computed from the trajectory points.
This option is also directly available via the trajectory context menu under Display measure.
The following options are available:
None
Name
Position
Total distance
Total horizontal displacement
Total vertical displacement
Speed
Horizontal velocity
Vertical velocity
Acceleration
Horizontal acceleration
Vertical acceleration
Note
To display kinematics measurements in real world units you must first calibrate the coordinate space.
If the video is natively in slow motion you must also calibrate the time scale.
This option uses the points of the trajectory to compute the best-fit circle of the trajectory.
This is the circle that minimizes the error of each point relative to an ideal circle.
This can be used to visualize the pseudo-center of a rotary motion.
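A common way to compute such a best-fit circle is a linear least-squares (Kasa) fit. The sketch below illustrates the idea; Kinovea's exact fitting method is not documented here:

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit: solves a linear system for the
    center (cx, cy) and radius r that best match the points."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Circle equation rewritten linearly: 2*cx*x + 2*cy*y + c = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)
```

For example, the points (8, 4), (3, 9), (-2, 4), (3, -1) yield a center of (3, 4) and a radius of 5.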
The size of the object and search windows can be modified by dragging the corners of the windows in the preview panel or by changing the values.
The object window should be as small as possible around the point of interest to avoid tracking interferences.
The search window should be large enough to contain the position of the point in the next frame,
but small enough to avoid interferences between multiple markers.
When the section of time covered by the trajectory contains key images they are displayed as small labels attached to the trajectory point at that time.
Unlike the trajectory tool the sizes of the search and object windows cannot be directly modified from the object configuration dialog.
In order to change the default sizes for these windows go to Options ‣ Preferences ‣ Drawings ‣ Tracking.
Tip
Use the trajectory tool configuration dialog to visually figure out the appropriate size of the object and search window, then enter these parameters in the preferences.
The calibration mechanism uses a line object or a plane object to define a coordinate system and transform pixel coordinates into real world coordinates.
If the object defining the calibration is itself tracked, the calibration will be updated at every frame.
This can be used to compensate a moving camera.
It is also possible to track the coordinate system origin while keeping the calibration static.
This can be used to obtain coordinates of a point relative to a moving reference.
Note
Tracking the calibration object and tracking the coordinate system are not compatible with each other. The tracked calibration object redefines the coordinate system.
To measure linear or angular speed and other kinematics quantities follow these steps:
Establish a line or plane calibration.
Compensate for lens distortion.
Track a point trajectory or track an object containing an angle.
The measured data can be displayed in two ways:
Directly on the object.
In a dedicated kinematics diagram.
To export the data, use the export options in the kinematics diagrams.
To display the measurement as a label attached to the object, right click the object and choose an option under the Display measure menu.
Each measurable object has specific options based on the quantities it can measure.
Note
The data displayed directly on objects uses raw coordinates, whereas the kinematics diagram uses the (optional) filtering mechanism.
The data can be exported to an image or to tabular data.
For tabular data the points are sorted by time; the first column is the time in milliseconds, the second and third columns are the X and Y positions.
The data source list contains a list of all tracked points.
Data from individual points can be hidden from the line chart by unchecking the checkbox in front of the point name.
Points in drawings with multiple points are listed by the name of their drawing and a point identifier.
The Data type drop down sets the kinematic quantity used in the line chart.
The following options are available:
Horizontal position
Vertical position
Total distance
Total horizontal displacement
Total vertical displacement
Speed
Horizontal velocity
Vertical velocity
Acceleration
Horizontal acceleration
Vertical acceleration
Total horizontal displacement and total vertical displacement compute the displacement along the corresponding axis between the first point and the current point of the trajectory.
This is similar to the horizontal and vertical position but makes it relative instead of absolute.
Note
Be mindful that acceleration values are very sensitive to noise as they are the second derivative of the digitized position.
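The noise amplification in the second derivative can be demonstrated numerically. The sketch below uses simple central differences, purely for illustration:

```python
import numpy as np

def derivatives(x, dt):
    """Central-difference velocity and acceleration from digitized positions."""
    v = np.gradient(x, dt)
    a = np.gradient(v, dt)
    return v, a

# Free fall digitized at 100 Hz with 1 mm of position noise:
dt = 0.01
t = np.arange(0, 1, dt)
clean = 0.5 * 9.81 * t ** 2                               # true acceleration: 9.81
noisy = clean + np.random.default_rng(0).normal(0, 0.001, t.size)
_, a_clean = derivatives(clean, dt)
_, a_noisy = derivatives(noisy, dt)
# a_clean recovers 9.81 almost exactly; a_noisy swings by several m/s^2
# even though the position error is only a millimeter.
```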
The data can be exported to an image or to tabular data.
For tabular data the points are sorted by time; the first column is the time in milliseconds, the other columns are values from the time series.
The data source list contains a list of all tracked angles.
Data from individual angles can be hidden from the line chart by unchecking the checkbox in front of the angle name.
The data can be exported to an image or to tabular data.
For tabular data the points are sorted by time; the first column is the time in milliseconds, the other columns are values from the time series.
The angle-angle diagram is a qualitative plot showing the dynamics of a movement pattern over multiple trials.
This type of diagram correlates two angular measurements over time to highlight their interaction.
For example, this can be used to study the coordination of the knee and hip during flexion or extension.
The data can be exported to an image or to tabular data.
For tabular data, the output has only two columns, correlating the first angle value with the second value at the same point in time.
Due to the digitization process the raw coordinates are noisy and the resulting quantities, especially derivatives like speed and acceleration, are less accurate than they could be.
Carefully filtering the coordinates removes a lot of this noise and provides more accurate measurements.
Data shown in the kinematics diagrams is computed using filtered coordinates.
This filtering can be disabled under Preferences ‣ Drawings ‣ General ‣ Enable coordinates filtering.
The coordinates are passed through a low pass filter to remove noise.
The filter does two passes of a second-order Butterworth filter.
The two passes (one forward, one backward) are used to cancel the phase shift [1].
To initialize the filter, the trajectory is extrapolated for 10 data points on each side using reflected values around the end points.
The extrapolated points are then removed from the filtered results [2].
The filter is tested on the data at various cutoff frequencies between 0.5Hz and the Nyquist frequency.
The best cutoff frequency is computed by estimating the autocorrelation of residuals and finding the frequency yielding the residuals that are the least autocorrelated.
The filtered data set corresponding to this cutoff frequency is kept as the final result [3].
The autocorrelation of residuals is estimated using the Durbin-Watson statistic.
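The procedure can be sketched with SciPy's Butterworth filter. This is a simplified illustration of the Challis (1999) idea, assuming SciPy is available; it is not Kinovea's actual code, and the reflection padding here stands in for the 10-point extrapolation described above:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def durbin_watson(res):
    """Durbin-Watson statistic; values near 2 mean uncorrelated residuals."""
    return np.sum(np.diff(res) ** 2) / np.sum(res ** 2)

def autofilter(x, fs):
    """Zero-phase 2nd-order Butterworth low-pass, sweeping cutoff
    frequencies and keeping the one whose residuals are least
    autocorrelated (Durbin-Watson closest to 2)."""
    best_y, best_score = x, float("inf")
    for fc in np.linspace(0.5, 0.95 * fs / 2, 40):
        b, a = butter(2, fc / (fs / 2))
        # filtfilt runs forward and backward, cancelling the phase shift.
        y = filtfilt(b, a, x, padtype="odd", padlen=10)
        score = abs(durbin_watson(x - y) - 2.0)
        if score < best_score:
            best_y, best_score = y, score
    return best_y

# Demo: noisy 5 Hz sine sampled at 100 Hz.
fs = 100
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + np.random.default_rng(1).normal(0, 0.05, t.size)
smoothed = autofilter(noisy, fs)
```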
For trajectories, the cutoff frequency can be visualized in the About tab of the diagram dialog.
The calculated cutoff frequency depends on the data and is different for each trajectory object.
Challis J. (1999). A procedure for the automatic determination of filter cutoff frequency for the processing of biomechanical data. Journal of Applied Biomechanics, 15(3).
The camera viewport is the main area where the camera image is visible.
The image itself can be moved around by dragging with the mouse and resized using the manipulators at the corners of the image or by using CTRL + mouse scroll.
Drawings on the capture screen can go outside the image area.
If the image stays black, there might be a problem with the available USB bandwidth or power, or the exposure duration might be too short.
If nothing is visible at all, not even the black image rectangle, the camera did not connect correctly; for example, the camera might be in use in another application at the same time.
The infobar contains information about the connected camera and streaming performances.
The first part of the infobar displays the alias of the camera and the current configuration for image size, frame rate and image format.
Clicking in this area of the infobar will bring up the camera configuration dialog.
The frame rate indicated is the one configured; the actual frame rate sent by the camera might be different for various reasons such as low light levels or hardware limitations.
The second part of the infobar displays the following live statistics:
This is the frequency at which Kinovea is receiving images from the camera. The value is in frames per second.
Many cameras will reduce their frame rate based on various external factors.
For example, when a camera uses auto-exposure and the exposure duration computed by the device is incompatible with the selected frame rate.
This is the amount of data that passes through Kinovea as it processes the stream. The value is in megabytes per second, with the convention of one megabyte as 1024 kilobytes.
This value is related to the image size, frame rate and format, possible on-camera image compression, link bandwidth, and possible post-processing done at the driver level.
You can use this value to estimate the necessary speed for your storage medium to write the uncompressed stream.
In the case of a non-compressed stream in RGB24 format, the value is calculated as follows:
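RGB24 uses 3 bytes per pixel, so the calculation is, in sketch form (the function name is ours, for illustration):

```python
def rgb24_bandwidth_mb_s(width, height, fps):
    """Uncompressed RGB24 bandwidth: 3 bytes per pixel, using the
    document's convention of 1 MB = 1024 * 1024 bytes."""
    return width * height * 3 * fps / (1024 * 1024)

# 1280 x 720 at 60 fps is about 158.2 MB/s of raw data.
```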
This value describes how much Kinovea is struggling to keep up with the camera framerate.
It is computed as the time taken to process one frame divided by the interval between frames.
When this value is near 100 %, it takes Kinovea as long to process one frame as the time budget it has for that frame; if it goes over 100 %, dropped frames may occur.
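As an illustration of the ratio described above (the function name is ours, not Kinovea's):

```python
def load_percent(processing_time_ms, frame_interval_ms):
    """Time to process one frame divided by the time budget between frames."""
    return 100.0 * processing_time_ms / frame_interval_ms

# At 50 fps the budget is 1000 / 50 = 20 ms per frame; if processing one
# frame takes 25 ms, the load is 125 % and frames will start to drop.
```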
The toolbar contains drawing tools usable on the capture screen. Some tools available in the playback screen are not available in the capture screen.
Some buttons may give access to multiple tools. To access the other tools, right click the button or perform a long press on the button.
The style profile dialog is not currently accessible in the capture screen; to change the default style of a tool you need to open a playback screen and change it from there.
The capture controls area contains the following buttons:
Configure camera
Displays the camera configuration dialog to change options like image size or frame rate.
The available options depend on the specific camera brand and model.
Pause camera
Pauses or restarts the camera stream. This disconnects the camera.
When the camera is disconnected, it is possible to review the last few seconds of action seen by the camera by adjusting the delay.
Disarm capture trigger
Disarms or rearms the audio capture trigger. When the audio trigger is disarmed, audio levels will not be monitored and capture will not be automatically started.
The microphone and audio level threshold can be configured from Options ‣ Preferences ‣ Capture ‣ Automation.
Save image
Saves the image currently displayed to an image file based on the configured file name and saving directory.
The saving directory can be configured from Options ‣ Preferences ‣ Capture ‣ Image naming.
Start recording video
Starts or stops recording the video. The video is recorded based on the compression options, recording mode, and naming options found under Options ‣ Preferences ‣ Capture.
The delay controls let you adjust the amount of delay, in seconds, of the displayed camera stream with regards to the real time action.
The maximum amount of delay depends on the camera configuration — hardware compression, image format, image size, frame rate — and the memory allocated in the delay cache under Options ‣ Preferences ‣ Capture ‣ Memory.
These fields define the names of the next files that will be saved when exporting an image or capturing a video.
They are automatically updated after each recording but can also be modified manually.
The file names can use macros like the current date or the name of the camera.
The list of available macros and configuration options can be found under Options ‣ Preferences ‣ Capture ‣ Image naming and Options ‣ Preferences ‣ Capture ‣ Video naming.
Clicking on the folder buttons will open the main preferences dialog on the relevant page.
Kinovea supports devices with a DirectShow driver.
These devices include USB video class (UVC) webcams, Mini DV camcorders, and analog video converters.
The available stream formats depend on the brand and model of the camera.
The typical stream formats include:
RGB, RGB24, RGB32: the images are not compressed.
YUV, YCbCr, YUY2, I420: the images are not compressed.
MJPEG: the images are compressed on the camera.
Using the MJPEG stream format can lower the bandwidth requirements and improve framerate.
Note
Kinovea's native storage format for compressed videos is MJPEG. When using this stream format, the videos are saved as-is without any extra decompression or compression step.
This value is related to the amount of time the sensor is exposed.
Changing the exposure duration lets you find a tradeoff between motion blur and light requirements.
Lowering the exposure duration reduces motion blur and increases the amount of light required to capture the scene.
For most camera brands, the unit of this value is not known and it is exposed as an arbitrary number.
For some camera brands the value is shown in milliseconds, when a special property is exposed by the driver or the values were derived manually.
This value is a limiting factor for the framerate. If this value is too high the framerate is lowered automatically by the camera.
This is the amount of amplification of the signal captured on the sensor.
Increasing this value increases the apparent brightness but can introduce noise in the image.
In order to add a network or IP camera in Kinovea use the Manual connection button on the camera tab of the explorer panel.
This dialog brings the configuration options described below. The same options are available later by clicking on the Configure camera button in the capture screen.
The values for the options depend on the particular brand of network camera and your own network configuration. The most important setting is the Final URL, this is the URL used by Kinovea to connect to the camera stream.
Clicking the Test button will try to connect to the camera and report success or failure.
Smartphones can be used as network cameras over the WiFi network by using a dedicated application, provided it supports streaming in MJPEG.
For example “IP Webcam” by Pavel Khlebovich can be used on Android devices.
Consult the application to find the URL of the server and the IP address of the smartphone.
Machine vision cameras are supported via plugins that are distributed separately from Kinovea.
Each plugin must be installed under the application data folder, inside the Plugins\Camera sub-folder.
The runtime for the specific camera brand, provided by the manufacturer, must also be installed separately.
Consult the section for each brand below to check if any extra customization is needed during the installation of the vendor’s runtime to make it work with Kinovea.
This section describes the common options for the configuration of machine vision cameras.
Settings or installation information specific to each camera vendor are described after the section Resulting Framerate.
When the selected stream format is a raw Bayer format, this option defines which reconstruction method, if any, is applied to the raw sensor data. The following options are available:
Raw: No reconstruction is performed. Kinovea receives the images as-is.
Mono: Monochromatic images are rebuilt by the camera vendor runtime before passing the images to Kinovea.
Color: Color images are rebuilt by the camera vendor runtime before passing the images to Kinovea.
Note
When using a raw stream format and the video is recorded without compression, the raw sensor data is saved to the video file.
It is then possible to rebuild the color at playback time by choosing the appropriate option under the menu Image ‣ Demosaicing.
This approach can be interesting to limit the bandwidth required to transfer the camera stream and save it to storage.
The target framerate. Whether this framerate is actually reached or not depends on the image format, size, exposure and the camera hardware.
If the framerate cannot be sustained, the Resulting framerate value will be displayed in red.
If the Auto checkbox is checked, the camera will ignore the value and always send the maximum framerate possible based on the rest of the configuration and the camera hardware.
If the Auto checkbox is not checked, the camera will use at most the configured value, if it is possible for the hardware to do so.
The manual configuration can be interesting if you want to use a specific framerate that is less than the maximum possible.
Note
After changing the image size or stream format you must click on Reconnect for the maximum framerate information to be updated.
This is the amount of time the sensor is exposed, in microseconds.
Changing the exposure duration lets you find a tradeoff between motion blur and light requirements.
Lowering this value reduces motion blur but increases the amount of light required to capture the scene.
This value is a limiting factor for the framerate.
For example a value of 20 milliseconds implies that there cannot be more than 50 images per second captured.
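As a back-of-the-envelope check, the relation between exposure and the maximum possible framerate can be sketched as follows (a simplification; readout time and transfer bandwidth can lower the real maximum further):

```python
def max_framerate(exposure_us: float) -> float:
    """Upper bound on the framerate imposed by exposure alone.

    The sensor cannot start a new frame before the previous exposure
    has finished, so the framerate is at most 1 / exposure duration.
    Readout time and bandwidth can lower it further.
    """
    return 1_000_000 / exposure_us

# A 20 ms (20,000 us) exposure caps the camera at 50 images per second.
print(max_framerate(20_000))  # 50.0
```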
This is the amount of amplification of the signal captured on the sensor.
Increasing this value increases the apparent brightness but can introduce noise in the image.
When installing Basler’s Pylon runtime software, it is necessary to use the Custom option in the installer, expand the pylon Runtime node, and select pylon C .NET Runtime option.
If you have already installed the software you can re-run the installer and choose Modify the current installation to access this option.
In order to use options that are not supported in Kinovea, use IDS’ uEye Cockpit.
Modify the camera configuration in uEye Cockpit and do File ‣ Save parameters to file.
Then in Kinovea, use the Import parameters button on the camera configuration dialog and point to the file you just saved.
To unlink the configuration file from Kinovea, right click the camera thumbnail in the main explorer view and use the menu Forget custom settings.
The camera simulator is a virtual camera in Kinovea.
This camera can be used to evaluate the expected performance of real hardware on a particular computer.
In order to add a camera simulator use the Manual connection button on the camera tab of the explorer panel.
In the manual connection dialog, use the Camera type drop down and select Camera simulator.
When using the JPEG format, a set of images is compressed in advance and the images are cycled through to the output.
This avoids any computer slowdown due to compression when using the simulator to evaluate whether a particular camera configuration would be sustainable, since in a real camera the JPEG compression would be done by the camera itself.
When using the RGB24 format, the current time and an image sequence number are stamped on the images.
Tip
You can use the sequential numbers on the images in the recorded video to verify if any frames were dropped during the recording process.
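The gap detection suggested in the tip above could be sketched like this (a hypothetical helper, assuming the stamped numbers have already been read off the recorded frames):

```python
def find_dropped_frames(sequence_numbers):
    """Return the sequence numbers missing from a recording.

    Assumes the frames are stamped with consecutive integers, as the
    camera simulator does in RGB24 mode.
    """
    present = set(sequence_numbers)
    full = range(sequence_numbers[0], sequence_numbers[-1] + 1)
    return [n for n in full if n not in present]

# Frames 3 and 4 were dropped in this hypothetical recording.
print(find_dropped_frames([1, 2, 5, 6, 7]))  # [3, 4]
```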
The live delay function lets you delay the display of the live stream.
This function is very useful for self-coaching: set the delay to be approximately the total time of the exercise plus the time necessary to come back to the computer.
The camera and Kinovea can then be left unattended.
By the time you complete your exercise and come back to the computer, you should see a replay of the action.
The same approach can be used with a group of students or athletes, forming an uninterrupted queue of intertwined feedback loops.
To delay the display of the camera stream use the delay slider or the delay input box.
The delay amount is in seconds.
The maximum amount of delay depends on the camera configuration — hardware compression, image format, image size, frame rate — and the memory allocated in the delay cache under Options ‣ Preferences ‣ Capture ‣ Memory.
If you wish to use a delay that is larger than the maximum that can be set using the slider, increase the size of the memory buffer under Options ‣ Preferences ‣ Capture ‣ Memory.
Note however that this buffer is always allocated, even if you do not use the delay function.
Note
When using two capture screens at the same time the memory buffer is shared between the screens.
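As a rough sizing aid, the link between the memory buffer and the maximum delay can be sketched as follows (a simplification assuming uncompressed frames; compressed streams fit more images in the same buffer, so the real maximum can be higher):

```python
def max_delay_seconds(width, height, bytes_per_pixel, fps, buffer_mb):
    """Approximate upper bound on the live delay a memory buffer can hold.

    Assumes uncompressed frames of a fixed size.
    """
    frame_bytes = width * height * bytes_per_pixel
    buffer_bytes = buffer_mb * 1024 * 1024
    return buffer_bytes / (frame_bytes * fps)

# 720p RGB at 60 fps with a 768 MB buffer: about 4.9 seconds of delay.
print(round(max_delay_seconds(1280, 720, 3, 60, 768), 1))  # 4.9
```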
It is possible to record video while delay is active. The option selected for the recording mode impacts whether the delay is taken into account for the recording or not.
Recording mode   Takes delay into account
Camera           No
Delayed          Yes
Retroactive      Yes
Combining delay and recording can be used to record actions happening before the moment the record button is hit or triggered.
When recording with delay, the time origin of the resulting video is set to the real moment the record button was hit or triggered.
For example we are filming a golf swing for a total duration of 2.5 seconds and a delay of 1.5 seconds.
The recording is started via audio trigger when the club hits the ball.
The first image of the video will correspond to what the camera was filming 1.5 seconds before the club hit the ball.
The time origin in the metadata file will be set to the club-ball impact. All of the action happening before the impact will be timestamped with negative numbers.
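The timestamps of the golf swing example can be worked out as follows (a sketch; the 10 fps framerate is an arbitrary illustration value):

```python
def frame_timestamps(total_duration_s, delay_s, fps):
    """Timestamps of recorded frames relative to the trigger moment.

    With delayed recording, the file starts `delay_s` seconds before
    the trigger; the trigger becomes the time origin, so everything
    before it gets a negative timestamp.
    """
    n_frames = int(total_duration_s * fps)
    return [round(-delay_s + i / fps, 3) for i in range(n_frames)]

# 2.5 s recording, 1.5 s delay, 10 fps: the first frame sits at -1.5 s
# and the trigger (club-ball impact) at 0.0 s.
stamps = frame_timestamps(2.5, 1.5, 10)
print(stamps[0], stamps[15])  # -1.5 0.0
```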
The path to save the recording and the file name of the output video are based on the options under Options ‣ Preferences ‣ Capture ‣ Video naming.
The duration of the recording is either set manually when using the stop recording button, or automatically when using the Stop recording by duration option in Options ‣ Preferences ‣ Capture ‣ Automation.
Recordings can be compressed or uncompressed based on the option set in Options ‣ Preferences ‣ Capture ‣ General.
Recordings can ignore or take into account the live delay based on the recording mode set in Options ‣ Preferences ‣ Capture ‣ Recording.
For high speed cameras the framerate set in the metadata of the output video can be customized by configuring a replacement framerate in Options ‣ Preferences ‣ Capture ‣ Recording.
When the recording process is not fast enough to sustain the camera framerate, images are skipped (dropped) and not added to the output video.
This can corrupt time measurements made on the output video, as such measurements assume a stable frame rate.
In order to make the most out of the camera without any frame drops, it is important to identify the bottlenecks and configure the camera and Kinovea according to your requirements and the trade offs you are interested in.
Use the infobar to get feedback about performance.
The recording mode Retroactive should not yield any frame drops but the duration of the output videos is limited by the amount of memory allocated for the delay buffer.
Furthermore, this mode makes the camera unavailable for a short time after the recording, while Kinovea performs the actual export.
The other two modes record on-the-fly and can record for arbitrary long periods (based on storage space) and without post-recording pause. However they require more configuration if the camera does not already produce compressed images.
For cameras that do not produce an already compressed stream, Kinovea compresses the images on the fly on the CPU. This process is usually slow compared to the frame rates such cameras can sustain.
You can disable compression on recordings under Options ‣ Preferences ‣ Capture ‣ General.
When compression is disabled, the amount of data to store on file is 5 to 10 times larger.
In this configuration, the speed of the storage medium might become the new bottleneck.
Solid state drives (SSD) are strongly recommended over hard disk drives (HDD); NVMe SSDs offer even better performance.
Ultimately, it is also possible to configure a RAM drive to further increase storage speed.
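To see why the storage medium matters, the required write bandwidth for uncompressed recording can be estimated as follows (a sketch assuming 3 bytes per pixel, typical for RGB24):

```python
def write_bandwidth_mb_per_s(width, height, bytes_per_pixel, fps):
    """Sustained write speed needed to record uncompressed video, in MB/s."""
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

# 1080p RGB at 100 fps needs roughly 593 MB/s of sustained writes:
# out of reach for most hard drives, comfortable for an NVMe SSD.
print(round(write_bandwidth_mb_per_s(1920, 1080, 3, 100)))  # 593
```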
It is possible to set up Kinovea to record and replay videos multiple times in a row without manual interaction.
To do this, set recordings to start from the audio trigger and stop from the recording duration preset.
Then add a replay folder observer monitoring the capture folder; it will automatically open and play each new recording.
Replay folder observers are special playback screens that constantly monitor a specific folder for new videos.
When a new video is created in the monitored folder, the playback screen automatically loads it and starts playing it.
This can be used to automate the capture and replay feedback loop.
The replay folder observer works at the file system level and is independent from the source or process creating the video.
The videos can be created by the same or another instance of Kinovea, or by an external process.
Apart from the monitoring mechanism, the video is loaded normally.
In particular the annotation file created by the capture screen is loaded and drawings created on the capture side are imported.
You can create a replay observer from the following places:
From the menu File ‣ Open replay folder observer…. This opens a folder selection dialog.
In the files and shortcuts tabs of the explorer panel, right click a folder or a file and choose Open as replay folder observer.
In the capture tab, right click a file in the capture history and choose Open as replay folder observer.
In the playback screen, right click the main video image and choose Open a replay folder observer…. This opens a folder selection dialog.
In the capture screen, in the recently captured files list, right click the thumbnail of a file and choose Open as replay folder observer.
The recently opened folder observers are listed under the menu File ‣ Recent.
Note
The menus on files open the observer on the parent folder of that file.
The observer then immediately loads the most recent file in that folder, which is not necessarily the file used to start it.
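This "most recent file" behavior can be sketched as follows (a simplified illustration; the real observer reacts to file system events rather than scanning on demand, and the extension list here is an assumption):

```python
import os

def most_recent_video(folder, extensions=(".mp4", ".avi", ".mkv")):
    """Return the newest video file in a folder, mimicking the file a
    replay folder observer loads when it starts."""
    candidates = [
        os.path.join(folder, name)
        for name in os.listdir(folder)
        if name.lower().endswith(extensions)
    ]
    if not candidates:
        return None
    return max(candidates, key=os.path.getmtime)
```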
When manually loading a video into a replay folder observer, the observer is not deactivated and will continue to monitor the folder.
In order to deactivate the monitoring and loading of new videos, you can turn the folder observer into a regular playback screen by clicking the observer icon in the infobar of the screen.
Conversely, in order to turn a normal playback screen into a replay folder observer, click the video icon in the infobar of the screen.
The annotations created on the capture screen are stored in a companion KVA file next to the recordings.
When the recorded video is opened in Kinovea the annotations are loaded and can be modified.
Because the capture screen continuously monitors the annotation file it last exported, any changes made to the annotations on the playback side are reflected in the capture screen.
The next recording from this capture screen will then use the updated annotations.
To export the current working zone as a new video use the Video export button,
the menu File ‣ Export video, or the context menu Export video.
This will open an export dialog with an option to take the slow motion into account.
If checked the resulting video will use the current speed slider to determine the output framerate.
The output video format can be selected in the file name dialog, choosing between MKV (Matroska), MP4, and AVI.
The video codec is always MJPEG.
Note
Drawings added to the video are painted on the output images and will no longer be modifiable.
Comments added at the key image level and other annotations that are not directly visible are not saved in the output.
Tip
To minimize the loss of information only export the video when absolutely necessary and otherwise keep the original video and save the annotations in a separate KVA file.
To export a series of images from the video, use the Export sequence button in the playback screen.
This brings up a dialog to configure the frequency at which the images are taken from the video.
If the checkbox Export the key images is checked, the frequency slider is ignored and only the key images are exported.
When using two playback screens, use the Export video or Export image buttons in the joint controls to create a single output containing both inputs side by side.
The input videos will be combined frame by frame using the configured synchronization point.
To export measurements such as point positions, linear velocities, or angular accelerations, use the kinematics dialogs.
The kinematics dialogs are found under the Tools menu: Scatter diagram, Linear kinematics, Angular kinematics, Angle-angle diagram.
To export data use the export options at the bottom right of the dialogs.
This will export the data in CSV format, either to the clipboard or to a file.
The first column is the time, either in milliseconds or normalized, and the other columns are the data sources.
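A sketch of reading such an export with Python's csv module; the column names here are invented for illustration, and the real headers depend on your tracked objects:

```python
import csv
import io

# Hypothetical excerpt of a kinematics CSV export: time in milliseconds
# in the first column, one column per data source after that.
exported = """Time,Horizontal velocity
0,1.20
100,1.35
200,1.50
"""

rows = list(csv.DictReader(io.StringIO(exported)))
times = [float(row["Time"]) for row in rows]
speeds = [float(row["Horizontal velocity"]) for row in rows]
print(max(speeds))  # 1.5
```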
The measurements displayed and exported from the kinematics dialog use the filtered coordinates.
The filtering process is described in the About box of the Linear kinematics dialog.
You can control filtering from the preferences at Options ‣ Preferences ‣ Drawings ‣ General ‣ Enable coordinates filtering.
Another option to export data is to use the converters menus under File ‣ Export to spreadsheet.
The following options are available:
LibreOffice (.odf)
Microsoft Excel (.xml)
Web (.html)
Gnuplot (.txt)
The underlying mechanism for these menus is to convert the annotation data into the output format: it does not perform any higher level computations.
This approach has the following differences with the export from the kinematics dialogs:
The coordinates do not use filtering.
Only the coordinates are exported, not any higher level measurements like speed or acceleration.
Each object is exported independently in its own table.
Key images times are exported.
Stopwatches are exported.
The time column uses the configured timecode format and may not be numerical in nature.
To use Gnuplot to plot the trajectory data on a 3D graph with time as the third dimension, you can use the following commands:
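For example, assuming the trajectory was exported through File ‣ Export to spreadsheet ‣ Gnuplot (.txt), with time in the first column and the x and y coordinates in the next two (the file name and column numbers here are assumptions to adapt to your export):

```gnuplot
set xlabel "x"
set ylabel "y"
set zlabel "time"
# Columns 2 and 3 form the plane, column 1 (time) is the third dimension.
splot "trajectory.txt" using 2:3:1 with lines
```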
If you used Kinovea in your research we would very much appreciate it if you included it in your bibliography.
You can find examples of formatted citations in the About dialog under the Citation tab.
Note
Kinovea is an open source project and is not published by a company,
thus there is no meaningful “city” or “country” of origin as is sometimes requested by journals for software references.
Kinovea does not store any personal data and does not transmit anything over the internet.
It is a standalone desktop application, similar to a media player with extra functions,
and does not require the Internet to function.
The only aspect of the program that makes use of the Internet is the menu “Check for updates…”,
which consults a file on the Kinovea web server and reports if a new version is available or not.
This question comes up when a scientific journal asks for bibliographic references.
Kinovea is not a company and does not have headquarters in any city.
There is no meaningful way to answer this question.
Please ask the journal about their dedicated format for citing Open Source software.
You may also check the built-in citations examples in the About dialog.
It is not the place of Kinovea or its authors to suggest one camera brand over another.
It is best for the project to stay neutral.
Besides, there are so many parameters specific to each use case that it is not possible to answer this question with anything other than “it depends”.
Please register on the forum and ask the question there, providing as much detail and context as possible; this way other users can share their experience directly.
Why is the speed of my video slowing down by itself?
The speed slider is forced down when the original frame rate of the video cannot be sustained by the player.
To check if a new version has been published from within the program itself you can use the menu Help ‣ Check for updates.
This function requires an Internet connection.
The function simply consults a text file on the web server that contains the version number of the latest published version.
There is no automatic update mechanism; to update, you must download and install the new version manually.
Name of this instance of Kinovea. Used in the window title and to select a preference file.
-hideExplorer
The explorer panel will not be visible. Default: false.
-workspace <path>
Path to a Kinovea workspace XML file. This overrides other video options.
To create a workspace file use the menu Options ‣ Workspace ‣ Export workspace.
-video <path>
Path to a video to load.
-speed <0..200>
Playback speed to play the video, as a percentage of its original framerate. Default: 100.
-stretch
The video will be expanded to fit the viewport. Default: false.
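For example, a hypothetical invocation combining these options (the video path is an illustration):

```shell
Kinovea.exe -hideExplorer -video "C:\Videos\swing.mp4" -speed 50 -stretch
```

This opens the video at half its original speed, stretched to the viewport, with the explorer panel hidden.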
While Kinovea is running it is possible to send it commands from a third party application or script.
For example this can be used to trigger recording based on a software event.
The commands use the Windows messaging system and the WM_COPYDATA message.
The exact way to create these Windows messages depends on the programming language or platform used to write the third party application.
Each command has to be sent separately. There is no return value.
The general format of the messages is the following:
Kinovea:<Category>.<Command>
The list of categories and commands is available under Options ‣ Preferences ‣ Keyboard.
This mechanism works even if Kinovea is not in the foreground.
If there are multiple instances of Kinovea running you will need to send the message to the correct instance based on its name.
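As an illustration, such a message could be sent from Python with ctypes on Windows. This is a sketch, not an official client; the window title, category, and command names in the usage comment are assumptions (the real list is under Options ‣ Preferences ‣ Keyboard), and the text encoding is also an assumption:

```python
import ctypes
import ctypes.wintypes as wintypes

def build_command(category: str, command: str) -> str:
    """Build the message payload in the documented Kinovea format."""
    return f"Kinovea:{category}.{command}"

def send_to_kinovea(window_title: str, message: str) -> None:
    """Send one command to a running Kinovea instance via WM_COPYDATA.

    Windows only. The window title identifies the instance when
    several are running.
    """
    WM_COPYDATA = 0x004A

    class COPYDATASTRUCT(ctypes.Structure):
        _fields_ = [("dwData", ctypes.c_void_p),
                    ("cbData", wintypes.DWORD),
                    ("lpData", ctypes.c_char_p)]

    hwnd = ctypes.windll.user32.FindWindowW(None, window_title)
    if not hwnd:
        raise RuntimeError(f"No window titled {window_title!r}")
    payload = message.encode("utf-8")  # encoding is an assumption
    cds = COPYDATASTRUCT(0, len(payload), payload)
    ctypes.windll.user32.SendMessageW(hwnd, WM_COPYDATA, 0,
                                      ctypes.byref(cds))

# Hypothetical usage; verify the category and command names in
# Options > Preferences > Keyboard before relying on them:
# send_to_kinovea("Kinovea", build_command("CaptureScreen", "ToggleRecording"))
```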