Kinovea is a video annotation tool designed for motion analysis.
It features utilities to capture, slow down, compare, annotate and measure motion in videos.
The sections below and the table of contents in the sidebar should let you access the documentation for your topic of interest.
You can also use the search function in the top-left corner.
The explorer panel is on the left side of the main Kinovea window.
It provides access to the file system and connected cameras.
To show or hide the explorer panel, use the menu View ‣ Explorer panel, or the shortcut F4.
The explorer panel contains the following tabs:
The file explorer shows a tree view of the folders on your computer.
The lower part of the tab shows a list of supported files found in the selected folder.
The shortcut explorer shows a tree view of the folders that were bookmarked as favorites.
To add a folder shortcut to this list, right click the selected folder in the file explorer tab or in the shortcut tab and choose Add to shortcuts.
You can also use the shortcuts toolbar in the shortcuts panel to add and remove shortcuts.
The thumbnail panel is displayed when no capture or playback screens are loaded.
It shows thumbnails corresponding to the active tab of the explorer panel.
For the file explorer and shortcut tabs, it shows thumbnails of the files in the selected folder.
For the camera explorer, it shows thumbnails of all the cameras connected to the computer.
The explorer buttons change the active tab in the explorer panel.
The right-most button enables or disables full-screen mode.
Thumbnails of video files contain several frames of the video.
The extra frames can be viewed using the sections at the bottom of the thumbnail.
Some metadata can be overlaid on the thumbnails of videos or images.
To configure which metadata is overlaid, right click the background of the explorer to bring up the context menu.
Enabling the Kva file metadata will show «kva» on the thumbnail when annotations exist as a separate file next to the main video.
The screens panel replaces the thumbnail panel when one or more screens are open.
When two screens of the same type are open, joint controls are added at the bottom.
Screens can be added and removed using the main toolbar, the View menu, or the close buttons on the screens themselves.
To swap the screens use the menu View ‣ Swap screens or the swap button in the joint controls.
More details about the playback screen user interface can be found in Playback screen user interface.
More details about the capture screen user interface can be found in Capture screen user interface.
A workspace in Kinovea is a specific arrangement of screens and their content.
When Kinovea starts using a workspace, it will reload the screens, reopen videos and cameras, and restart the replay folder observers.
You can save the current screen arrangement as the default workspace using the menu Options ‣ Workspace ‣ Save as default workspace.
The next time Kinovea starts, it will read this workspace and reload the videos and cameras accordingly.
To delete the default workspace and have Kinovea start normally in the thumbnail explorer, use the menu Options ‣ Workspace ‣ Forget default workspace.
Workspaces can be exported to separate XML files using the menu Options ‣ Workspaces ‣ Export workspace.
To start Kinovea using an explicit workspace file, it must be passed as an argument on the command line.
It is possible to run multiple Kinovea instances at the same time on the same computer.
This can be used to record more than two cameras, play more than two videos at the same time, or create more advanced setups for capture and playback or instrumentation scenarios.
Note
By default, each instance has its own set of preferences, separate from the others.
This behavior is controlled by the option under Preferences ‣ General ‣ Instances have their own preferences.
This option can only be changed from the first Kinovea instance.
Each instance is assigned a number in sequential launch order and, by default, this number becomes the name of the instance.
The instance name can be seen in the window title bar, between brackets.
The instance name can be customized by passing the new name with the -name argument on the command line.
Command-line arguments can also be specified by creating a Windows shortcut to Kinovea.exe and editing the Target field in the shortcut properties.
To open a video in Kinovea do any of the following:
Drag and drop the file from Windows explorer onto the Kinovea window.
Use the menu File ‣ Open and select the file.
Use the menu File ‣ Recent and select a recently opened file.
Use the explorer panel to locate and double click on a file in the file explorer, the shortcut explorer, or the recently captured files, or drag and drop it into an opened playback screen.
Double click on a thumbnail in the thumbnail explorer.
Kinovea comes equipped with video decoding libraries to support many file types and does not need or use any third-party codecs.
It supports the following:
Most video file formats, including MP4, MPEG, MOV, AVI, MKV, M4V, FLV, WMV, among others.
Simple image files such as PNG, JPEG and BMP.
Animated GIFs.
SVG vector images.
Image sequences.
Image sequences are collections of individual image files that have the same size and are named following a numbering pattern. They are loaded as a video.
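As a sketch of this naming convention, files of a sequence share a common prefix and extension and differ only by an increasing number. The helper below (with hypothetical filenames, not Kinovea code) extracts that pattern so files sort in frame order:

```python
import re

def sequence_key(filename):
    """Split a filename like 'frame_0042.png' into a (prefix, extension, number)
    tuple so that files from the same numbered sequence sort numerically."""
    m = re.match(r"^(.*?)(\d+)(\.\w+)$", filename)
    if m is None:
        return None
    prefix, digits, ext = m.groups()
    return (prefix, ext, int(digits))

# Hypothetical image sequence: same prefix and extension, increasing number.
files = ["frame_0010.png", "frame_0002.png", "frame_0001.png"]
ordered = sorted(files, key=sequence_key)
print(ordered)  # ['frame_0001.png', 'frame_0002.png', 'frame_0010.png']
```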
The drawings toolbar contains buttons to create new key images, select the active tool and open the color profile.
The toolbar contains more tools than those immediately visible.
Buttons that host extra tools have a little black triangle in the top-left corner.
The extra tools can be accessed by right-clicking or long-clicking the primary button.
The working zone defines the segment of the video that the player is working with.
The play head loops within the working zone.
Marks the current time as the time origin. This makes time values relative to this moment.
Locks the working zone start and end point to avoid changing them by mistake.
Sets the starting point of the working zone within the video.
Sets the ending point of the working zone within the video.
Resets the working zone to the whole video.
You can also update the working zone boundaries by directly manipulating the blue end points.
Tip
If the amount of data fits in the cache memory, the working zone will be loaded in memory.
This improves playback performances and enables the Video ‣ Overview and Video ‣ Reverse menus.
The cache memory can be configured under Options ‣ Preferences ‣ Playback ‣ Memory.
The speed slider goes from 0 to twice the nominal speed of the video.
The displayed speed value takes into account the slow motion factor configured such that the speed is shown as a percentage of the real world action speed.
For example if a video is filmed at 240 fps and saved into a file as 24 fps, the video will normally play back at 10% of the real world speed.
In this case the speed control will go from 0 to 20% with a mid-point at 10%.
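The arithmetic above can be sketched as follows; the function and parameter names are illustrative, not part of Kinovea:

```python
def real_world_speed(slider, capture_fps, video_fps):
    """Displayed speed as a percentage of real-world action speed.

    slider is the playback rate relative to the file's nominal rate
    (1.0 = nominal speed, the slider mid-point; 2.0 = slider maximum).
    """
    slow_motion_factor = video_fps / capture_fps   # e.g. 24/240 = 0.1
    return slider * slow_motion_factor * 100.0

# Video filmed at 240 fps, saved to file at 24 fps:
print(real_world_speed(1.0, 240, 24))  # 10.0 -> nominal playback is 10% of real speed
print(real_world_speed(2.0, 240, 24))  # 20.0 -> slider maximum
```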
Warning
If the video cannot be played back at its nominal speed for performance reasons, the playback speed value is automatically lowered.
Performance for playing back depends on the displayed image size, the frame rate and the file format.
To change the aspect ratio of the image use the menus under Image ‣ Image format.
Some devices use non-rectangular pixels and do not fill in the corresponding pixel aspect ratio value in the file metadata.
In these cases it might be necessary to force the aspect ratio to a known value.
To deinterlace the video use the menu Image ‣ Deinterlace.
Some capture devices store video using an interlaced format.
Interlaced videos store half images at twice the frame rate, alternating odd and even rows.
This causes a combing artifact when the filmed motion is fast as objects or subjects move during the half frame interval.
The deinterlacing algorithm reconstructs full images by combining rows from adjacent frames.
When a video is captured with a high speed camera (or using the high speed or slow motion function of a smartphone),
the generated video file often has a frame rate different from the capture frame rate.
For example a video filmed at 1000 fps may be saved to a file with a more typical playback frame rate of 25 fps.
In this case the video will play back slower than real time, which is expected,
but the time-related information and calculations would be erroneous if they were based on the playback framerate.
To work with this type of video it is important to configure (or «calibrate») the time scale.
This is done by going to Video ‣ Configure video timing and filling the capture frame rate.
Changing this option does not change the nominal speed at which the video is played back.
In other words setting the speed control at its mid-point will still play back the video at the same slow motion rate as before.
Instead, this option changes the time coordinates of the images.
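Assuming time coordinates are derived as frame index divided by the capture frame rate (the straightforward reading of the description above), the effect can be sketched as:

```python
def real_time_ms(frame_index, capture_fps):
    """Real-world time coordinate of a frame, given the capture frame rate."""
    return 1000.0 * frame_index / capture_fps

# Filmed at 1000 fps, saved at 25 fps.  Frame 50 plays back 2 s into the
# file (50/25 s), but represents only 50 ms of real-world action (50/1000 s).
print(real_time_ms(50, 1000))  # 50.0   -> calibrated time coordinate
print(real_time_ms(50, 25))    # 2000.0 -> time if no calibration is applied
```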
In some cases a video is saved with a frame rate which is just plain wrong, or Kinovea cannot read it.
For example a USB camera might claim that it is capturing video at 25 fps but the video stream is actually transferred at 15 fps.
In this case the video will play back at the wrong speed and the time calibration will be wrong.
If you know the real playback frame rate at which the video is supposed to be played back, you may enter it in Video ‣ Configure video timing.
Changing this option does change the nominal speed at which the video is played back.
Two videos can be synchronized by setting their time origin to a common event visible on both videos.
When the videos are synchronized they will pass through their time origin at the same time.
To set the time origin in a video move to that point in the video
and click the Mark current time as time origin button or right click the background of the video and choose the Mark current time as time origin menu.
Alternatively you can move each video to the correct point independently and use the Synchronize videos on the current frames button in the joint controls area.
During joint-playback, the synchronization mechanism means one video may start and/or end playing before the other.
To perform joint frame-by-frame navigation, move the cursor in the joint timeline or use the joint controls buttons.
By default the speed controls in each playback screen are linked with each other:
lowering the playback speed in one video lowers it in the other.
The speed controls are independent of the frame rate of the video files so
this should apply a similar slow motion factor to both videos and keep the comparison meaningful.
Tip
If one of the videos was captured with a high speed camera and has a different capture frame rate,
this frame rate should be configured for this video via menu Video ‣ Configure video timing.
Once this configuration is done both controls will still be coherent with each other.
If you are confident that you do not want the speed sliders to be linked together you may change the option in Options ‣ Preferences ‣ Playback ‣ General ‣ Link speed sliders when comparing videos.
Superposition paints each video on top of the other at 50% opacity.
This is a basic mechanism to compare motion in-situ if the videos were filmed in the same environment with a static camera.
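A 50% blend of two frames can be sketched per pixel as follows (illustrative only; Kinovea performs this internally):

```python
def blend(pixel_a, pixel_b, opacity=0.5):
    """Blend two RGB pixels; at opacity 0.5 each video contributes equally."""
    return tuple(round(opacity * a + (1.0 - opacity) * b)
                 for a, b in zip(pixel_a, pixel_b))

print(blend((200, 100, 0), (100, 200, 50)))  # (150, 150, 25)
```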
The overview mode is a special video mode used to display multiple images of the video at the same time.
This type of display is also called a «kinogram».
It can be used to create a single-image summary of the whole video sequence.
To enter the overview mode, use the menu Video ‣ Overview.
Tip
The overview mode is only enabled when the entire working zone is cached in memory.
To change the amount of memory used for the cache go to Options ‣ Preferences ‣ Playback ‣ Memory.
When the Overview mode is active the main image viewport displays a collection of frames taken from the video at regular intervals.
The playback controls are disabled.
To control the number of images displayed use the mouse wheel. The display can be set between 2x2 and 10x10 images.
To quit the Overview mode, use the close button in the top-right corner of the playback screen.
The reverse mode is a video mode where the video frames are re-ordered backwards.
To enter the reverse mode, use the menu Video ‣ Reverse.
Tip
The reverse mode is only enabled when the entire working zone is cached in memory.
To change the amount of memory used for the cache go to Options ‣ Preferences ‣ Playback ‣ Memory.
To quit the Reverse mode, use the close button in the top-right corner of the playback screen.
Annotation tools are used to add drawings and text to images of the video.
Some tools can also be used to measure distances or display coordinates.
Drawings are attached to a specific key image.
Deleting the key image deletes all the drawings attached to it.
Drawings are vector-based: they can be modified after they have been added to the video.
Drawings have a context menu that can be used to access style options, visibility configuration, tool-specific functions, tracking management, copy and paste, and deletion.
While a tool is active, right clicking the viewport opens the color profile at the page of the active tool.
There are more tools than those immediately visible.
Buttons with a small arrow in the top-left corner contain other tools that can be accessed by doing a right click or a long click (click and hold) on the button.
A flying menu opens with the extra tools available.
The hand tool is used to manipulate drawings or pan the whole image.
To stop using a particular tool and come back to the hand tool use the Escape key or click the hand tool button.
Tip
You can also use the middle mouse button to directly manipulate drawings without changing back to the hand tool.
Key image
Adds a new key image.
Commentary
Opens the commentary dialog to attach a paragraph of text to the key image using the rich-text editor.
Color profile
Opens the color profile dialog to change the default style of drawings.
The magnifier function creates a picture-in-picture effect with an enlarged version of the current image displayed within the original image.
This is a display mode rather than a normal drawing tool; it is not saved in the KVA file.
Drawings have styling properties like color, size of text or type of line. The exact set of properties depends on the tool.
New drawings use the default style configured for this type of drawing.
The default style can be changed in two ways:
Open the color profile by clicking the color profile button or right clicking the viewport while the corresponding tool is active.
Right click an existing drawing and use the menu Set style as default.
To change the style of a drawing after it is created right click it and select the Configuration menu.
This dialog also lets you change the name identifying the drawing.
Certain tools that are presented as separate entries in the tool bar are actually style variants of each other.
For example the presence of arrows at the end of lines is merely a style option.
This means it is possible to convert a line into an arrow by changing the arrow style option.
Drawings have visibility properties that control their opacity throughout the video.
New drawings start with the default opacity options set in Options ‣ Preferences ‣ Drawings ‣ Opacity.
Each drawing can then be configured to use different visibility options.
In general terms drawings have a fade-in ramp, an opaque section and a fade-out ramp.
The default options make drawings fully visible on their key image and fade in and out of the neighboring frames.
When drawings are tracked they stay opaque during the section of video where they are tracked.
With this option the drawing uses a custom configuration defined through the Configure custom fading dialog.
Maximum opacity (%)
This option controls the opacity used during the opaque section.
A value of 100 % means the drawing will not let the background show through.
A value less than 100 % means the drawing will be somewhat transparent.
Opaque duration (frames)
This option controls how long the drawing stays at its maximum opacity level before fading out. This section starts at the keyframe onto which the drawing was added.
Fading duration (frames)
This option controls the duration of the ramps before and after the maximum opacity until the drawing becomes completely invisible.
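Assuming linear ramps, which is a simplification of whatever curve Kinovea actually uses, the opacity at a given frame could be computed as:

```python
def opacity_at(frame, keyframe, max_opacity=100.0,
               opaque_duration=10, fading_duration=20):
    """Opacity (in %) of a drawing at a given frame.

    The drawing is fully opaque from its keyframe for `opaque_duration`
    frames, with a fade-in ramp before and a fade-out ramp after (an
    assumed linear ramp; Kinovea's exact curve may differ)."""
    if keyframe <= frame < keyframe + opaque_duration:
        return max_opacity
    if frame < keyframe:                       # fade-in ramp
        distance = keyframe - frame
    else:                                      # fade-out ramp
        distance = frame - (keyframe + opaque_duration - 1)
    if distance >= fading_duration:
        return 0.0
    return max_opacity * (1.0 - distance / fading_duration)

print(opacity_at(100, keyframe=100))  # 100.0 (on the keyframe)
print(opacity_at(90, keyframe=100))   # 50.0  (half-way up the fade-in)
```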
Time information in Kinovea is relative to a specific point in the video.
By default, this is the start of the video or working zone.
To set the time origin to an arbitrary point in the video navigate to this point and use the Mark current time as time origin button or context menu.
All displayed times will be relative to this origin, using negative time before the event and positive time after it.
Setting the time origin to another point of the video can be useful to quickly get timing information in relation to a specific event,
for example, a golf ball impact, a pitcher’s release point, a long jumper’s take-off, a starting gun trigger.
In addition to opening image files as if they were videos, you may import images and vector drawings into videos to create a picture in picture effect.
To import an image file in the current video use the menu Tools ‣ Observational references ‣ Import image….
To import an image that you have copied in the clipboard in another application,
right click the video background and use the menu Paste image from clipboard.
To transfer an image from one screen to the other, use the menu Copy image to clipboard and then paste it in the other video.
Annotations are saved in KVA files. These are XML files with a .kva extension.
KVA files store data for key images, comments, drawings, trajectories, chronometers, time origin, tracked values, and the coordinate system calibration.
To save the current annotations use the menu File ‣ Save or the shortcut CTRL + S.
Tip
Save the annotation file with the same name as the video to have it automatically loaded the next time you open the video in Kinovea.
This is known as a «sidecar» file, for example video.mp4 and video.kva. This is the default option when saving.
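Deriving the sidecar annotation path from the video path can be sketched as:

```python
from pathlib import Path

def sidecar_kva(video_path):
    """Annotation file path matching the video name, e.g. video.mp4 -> video.kva."""
    return Path(video_path).with_suffix(".kva").as_posix()

print(sidecar_kva("C:/sessions/video.mp4"))  # C:/sessions/video.kva
```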
To load an annotation file into an opened video use the menu File ‣ Load annotations.
The imported annotations are merged together with the existing annotations you might have already added to the video.
If the imported annotations were created on a different video, a conversion step may take place to adapt the drawing dimensions and time codes to the new image size and frame rate.
Loading external annotations can be used to import calibration settings between videos filmed during the same session without having to perform the calibration for every video.
To automatically import a specific annotation file into every video,
use Options ‣ Preferences ‣ Playback ‣ General ‣ Default annotation file…, and point it to the KVA file you want to be loaded.
The result of running the OpenPose program on a video is a set of JSON files containing data for one or more human postures.
OpenPose uses a 25-point body model.
This is not meant to be used for measurements but for general posture assessment.
The workflow to import OpenPose data into Kinovea is the following:
Run the OpenPose software on the video, using the write_json option.
This creates a set of .json files in the output directory.
Each file contains descriptors for the detected poses.
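As a sketch of the file layout, each JSON file contains a people array whose pose_keypoints_2d field flattens the points as (x, y, confidence) triplets. A minimal parser (not Kinovea's importer) might look like:

```python
import json

def load_poses(json_text):
    """Extract (x, y, confidence) triplets for each detected person from
    one OpenPose frame file written with the write_json option."""
    frame = json.loads(json_text)
    poses = []
    for person in frame.get("people", []):
        flat = person["pose_keypoints_2d"]       # 25 points x (x, y, c) in BODY_25
        points = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        poses.append(points)
    return poses

# Minimal hypothetical file with a single 2-point pose (real files have 25 points).
sample = '{"people": [{"pose_keypoints_2d": [10.0, 20.0, 0.9, 30.0, 40.0, 0.8]}]}'
print(load_poses(sample))  # [[(10.0, 20.0, 0.9), (30.0, 40.0, 0.8)]]
```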
In order to use Kinovea to make measurements on the video, it is necessary to calibrate the transformation of pixels in the image into real world units.
Kinovea supports two calibration mechanisms: calibration by line and calibration by plane.
All measurements in Kinovea must sit on a 2D plane.
If the motion you want to study is on a plane parallel to the image plane (orthogonal to the camera optical axis), you may use calibration by line.
Otherwise, if you are measuring points on the ground, for example, you should use the calibration by plane.
If the motion is happening in arbitrary 3D space you cannot measure it in Kinovea.
Line calibration is possible when the motion is sitting on a 2D plane parallel to the camera plane.
To perform line calibration follow these steps:
Have an object of known length visible in the video.
Add a line object and place it on top of the object of known length.
Right click the line and select the Calibrate menu.
Enter the real-world length of the object.
Note
Modifying the calibration line after calibration already took place will update the calibration to use the new pixel length of the line
and keep the real-world length as configured.
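The underlying computation is a single scale factor, the real-world length divided by the pixel length of the line. A sketch:

```python
import math

def line_calibration(p1, p2, real_length):
    """Scale factor (real-world units per pixel) from a line of known length."""
    pixel_length = math.dist(p1, p2)
    return real_length / pixel_length

# A 1 m rod spans 400 pixels in the image:
scale = line_calibration((100, 50), (500, 50), real_length=1.0)
print(scale)        # 0.0025 m per pixel
print(scale * 200)  # ~0.5 -> a 200-pixel distance is about 0.5 m
```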
Plane calibration is possible when the motion is sitting on an arbitrary 2D plane visible in the video.
To perform plane calibration follow these steps:
Have a rectangle of known dimensions visible in the video.
Add a perspective grid object and move its corners to match the rectangle.
Right click a corner and select the Calibrate menu.
Enter the real-world width and height of the rectangle.
Note
Modifying the calibration plane after calibration already took place will update the calibration to use the new pixel dimensions of the plane
while keeping the real-world dimension as configured.
Mount the camera on a tripod and avoid camera motion
The camera must remain stationary for the images to provide a stable frame of reference.
If the camera is moving relative to the scene, the plane of motion will change over time
and the calibration from one video frame cannot be used on other frames.
Tip
If you do not control the camera and it is moving, you can try to track the calibration object itself.
To minimize perspective distortion, place the camera as far as possible from the scene and zoom in.
This will reduce errors due to points moving in and out of the plane of motion.
To align the axes of the coordinate system with the real world use a plumb line or other object of known direction as the calibration object.
If the real world vertical line is not parallel to the image side you can set the calibration line to define the vertical axis.
Avoid measuring objects outside the plane of motion
Everything that is measured must sit on the calibrated plane of motion.
Coordinates and measurements using points physically outside the plane of motion will be inaccurate.
The first placed point of the line becomes the default origin of the coordinate system.
The line direction becomes an axis of the coordinate system based on the option selected in Coordinate system alignment.
The following alignment options are available:
The line defines the horizontal axis
The line defines the vertical axis
Aligned to image axes
If Align to image axes is selected the orientation of the line is ignored.
The bottom-left corner of the grid becomes the origin of the coordinate system.
The coordinate system axes are aligned with the calibration grid object.
When using a camera with significant lens distortion, it is important to calibrate the distortion in Kinovea before making measurements.
The lens distortion calibration rectifies the coordinates before they are passed to the spatial calibration.
Barrel distortion exposed by a GoPro action camera and the default coordinate system after lens distortion calibration.
Lens calibration is compatible with both line calibration and plane calibration.
The coordinate system, lines and grid objects are drawn distorted to follow the distortion.
The images themselves are not rectified.
The description of the lens distortion is done in the lens calibration dialog in the menu Tools ‣ Camera calibration.
There are multiple approaches to finding coefficients that best match a particular camera lens and camera mode; they are described below.
All the approaches involve filming a highly structured pattern in order to facilitate the distortion estimation, whether it is done visually or programmatically.
This pattern can be a chess board, a brick wall, or any structured image displayed on a flat screen.
The following parameters are used during calibration:
In this approach the distortion is estimated visually by directly changing the distortion coefficients and trying to align the grid on the image.
It is the least accurate approach but can still provide decent results.
Follow these steps:
Film the pattern straight on.
Load the video and open the lens calibration dialog.
Tweak k1 and k2 distortion coefficients to match the distorted grid image.
k1 is the distortion of the first order and should be used as a starting point to get the grid to align as best as possible in the central region of the image.
k2 can be used to counteract the main distortion at the periphery of the image.
You can play with the cx, cy, p1, p2 parameters as well to adjust the grid to the image.
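These parameters follow the common radial-tangential distortion model; assuming that convention, applying the distortion to a point in normalized coordinates can be sketched as:

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a point
    in normalized camera coordinates (origin at the distortion center).
    This is the standard radial-tangential model; Kinovea's exact
    convention may differ in details."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Pure barrel distortion (negative k1) pulls points toward the center:
print(distort(0.5, 0.0, k1=-0.2))  # (0.475, 0.0)
```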
In this approach the distortion is estimated programmatically by Kinovea from a set of distortion grid objects manually placed over images of the video.
Follow these steps:
Film a chess board or structured pattern from various angles.
Select some images (for example five) and add a Distortion grid object.
Map each point of each grid onto corners on the filmed pattern.
Open the lens calibration dialog and click the Calibrate camera button in the lower left.
This will compute and fill the distortion parameters.
The distortion parameters are saved in the KVA file but if you want to re-use the same parameters on a different video you can export them to a separate file.
Use the menus File ‣ Save and File ‣ Open in the lens calibration dialog.
Note
Any change of camera model, lens, or configuration options involving image resolution or zoom requires a new calibration procedure.
The time unit can be changed from the menu Options ‣ Time or from Options ‣ Preferences ‣ Playback ‣ Units ‣ Time.
The following options are available:
Format                             Example           Description
[h:][mm:]ss.xx[x]                  1:10.48           Textual timecode.
Frame number                       1762              Rank of the current frame.
Total milliseconds                 70480 ms          Integer number of milliseconds.
Total microseconds                 1284 µs           Integer number of microseconds.
Ten thousandth of an hour          904               Ten thousandths of an hour.
Hundredth of a minute              542               Hundredths of a minute.
[h:][mm:]ss.xx[x] + Frame number   1:10.48 (1762)    Timecode followed by the frame number.
Note
The configured time unit is used for the time position and duration, the clock and stopwatch tools, and when exporting to spreadsheets.
The kinematics dialog always uses total milliseconds as the time unit.
Times displayed are relative to the time origin defined for the video.
By default the time origin is at the beginning of the video but you can change it
in order to display times relative to a particular event of the video, such as a ball impact,
a jump take-off, a release point, a race start, etc.
Video positions before the time origin have negative times.
The time origin can be changed manually by using the time origin button or by right clicking the image and selecting Mark current time as time origin.
The current video frame becomes the time origin.
To annotate the video with the current video time use the clock tool.
By default a clock object simply displays the current video time using the global time origin.
Each clock tool may also have its own custom time origin independent from the global one.
To define a custom time origin, right click the clock tool and choose Mark current time as time origin for this clock.
The clock object can be identified by name by using the Show label menu and changing the object name in its configuration dialog.
Videos filmed with high speed cameras or in high speed mode can have a video frame rate different from the capture frame rate.
For example a video filmed at 1000 fps is typically saved with a «standard» playback rate of 30 fps.
This makes the final video appear in slow motion even when the speed slider is set to 1x.
In this case it is important to tell Kinovea about the original capture frame rate for times to be correct.
This impacts time positions and time intervals.
Open the video timing dialog from menu Video ‣ Configure video timing…
and in the top part of the dialog, enter the capture frame rate.
Note
When capturing videos with Kinovea this option is automatically set based on information found in the KVA file saved next to the recording.
To measure an angle, add an angle object and position its end points.
By default the angle runs counter-clockwise, starting from the dashed leg.
The context menu options let you switch between signed or unsigned angle, change from counter-clockwise to clockwise, and switch to supplementary angle.
Note
Always keep in mind that it is not possible to measure angles in arbitrary space from a 2D image.
Angles can only be measured when the three points lie on a known 2D plane.
This must be either the image plane or the plane calibrated using plane calibration.
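Computing an angle from three points on a known 2D plane can be sketched as follows (illustrative, not Kinovea code):

```python
import math

def angle_deg(vertex, a, b):
    """Unsigned angle at `vertex` between rays vertex->a and vertex->b,
    for three points lying on a known 2D plane."""
    ang = math.atan2(b[1] - vertex[1], b[0] - vertex[0]) \
        - math.atan2(a[1] - vertex[1], a[0] - vertex[0])
    ang = abs(math.degrees(ang))
    return min(ang, 360.0 - ang)

# Right angle: vertex at the origin, one ray straight up, one to the right.
print(angle_deg((0, 0), (0, 1), (1, 0)))  # 90.0
```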
The goniometer tool lets you measure the extension or flexion of a body segment relative to a referenced anatomical angle or neutral position.
A physical goniometer combines two arms and a protractor.
One arm is called the stationary arm and the other is called the movable arm.
The protractor part contains multiple graduated rings that allow the physician to pick a reference axis when reading the angle.
The stationary arm is aligned with the reference segment to materialize the neutral position and the movable arm is aligned with the segment for which we are measuring the range of motion.
Using a goniometer instead of a simple protractor makes it easier to align the protractor with the body segments
and helps standardize the way the measurements are made for the range of motion of specific joints.
In Kinovea, the goniometer tool has three branches:
The stationary arm is the thick plain arm.
The movable arm is the one with the arrow at the end.
The dashed line is used to define the protractor reference axis in relation to the stationary arm.
This branch rotates in 45° increments relative to the stationary arm.
Plantar flexion. The reference axis is set perpendicular to the leg.
This tool is conceptually similar to a real goniometer with 8 protractor rings but the reading is simplified by showing only one measurement at a time.
The angle-to-horizontal and angle-to-vertical tools let you measure angles relative to the horizontal or vertical axes of the image.
The dashed line represents the reference axis.
To track the trajectory of a single point or body joint visible on the image follow these steps:
Right click the object to track and choose Track path.
Move the video forward using the Next frame button, the mouse wheel or the Play button.
Adjust the point position as necessary during the path creation.
To finish tracking, right click inside the tracking window and choose End path edition.
Tracking is a semi-automatic process. A candidate point location is computed automatically but can be adjusted manually at any time.
While tracking is in progress two rectangles will be visible around the object being tracked.
The inner rectangle is the object window, the outer rectangle is the search window.
When the automated tracking fails, correct the point location by dragging the search window. Drag it until the cross at the center of the tracking tool is at the correct location.
When tracking resumes, it will use this new point as reference.
Tip
When the trajectory object is not in path edition mode the trajectory is interactive: clicking on any point of the trajectory will move the video to the corresponding frame.
This option enables the display of a measurement label following the current point on the trajectory.
The options are kinematics quantities computed from the trajectory points.
This option is also directly available via the trajectory context menu under Display measure.
The following options are available:
None
Name
Position
Total distance
Total horizontal displacement
Total vertical displacement
Speed
Horizontal velocity
Vertical velocity
Acceleration
Horizontal acceleration
Vertical acceleration
Note
To display kinematics measurements in real world units you must first calibrate the coordinate space.
If the video is natively in slow motion you must also calibrate the time scale.
This option uses the points of the trajectory to compute the best-fit circle of the trajectory.
This is the circle that minimizes the error of each point relative to a virtual perfect circle.
This can be used to visualize the pseudo-center of a rotary motion.
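A common way to compute such a best-fit circle is an algebraic least-squares (Kasa) fit. The sketch below illustrates the general technique; it is not Kinovea's exact implementation.

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares (Kasa) circle fit: returns (cx, cy, r).

    Rewrites x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    as a linear system in the unknowns cx, cy and the constant term.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

Feeding the digitized trajectory points to such a fit yields the pseudo-center and radius of the rotary motion.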
The size of the object and search windows can be modified by dragging the corners of the windows in the preview panel or by changing the values.
The object window should be as small as possible around the point of interest to avoid tracking interferences.
The search window should be large enough to contain the position of the point in the next frame,
but small enough to avoid interferences between multiple markers.
When the section of time covered by the trajectory contains key images, they are displayed as small labels attached to the trajectory point at that time.
Unlike the trajectory tool, the sizes of the search and object windows cannot be directly modified from the object configuration dialog.
In order to change the default sizes for these windows go to Options ‣ Preferences ‣ Drawings ‣ Tracking.
Tip
Use the trajectory tool configuration dialog to visually figure out the appropriate size of the object and search window, then enter these parameters in the preferences.
The calibration mechanism uses a line object or a plane object to define a coordinate system and transform pixel coordinates into real world coordinates.
If the object defining the calibration is itself tracked, the calibration will be updated at every frame.
This can be used to compensate a moving camera.
It is also possible to track the coordinate system origin while keeping the calibration static.
This can be used to obtain coordinates of a point relatively to a moving reference.
Note
Tracking the calibration object and tracking the coordinate system are not compatible with each other. The tracked calibration object redefines the coordinate system.
To measure linear or angular speed and other kinematics quantities follow these steps:
Establish a line or plane calibration.
Compensate for lens distortion.
Track a point trajectory or track an object containing an angle.
The measured data can be displayed in two ways:
Directly on the object.
In a dedicated kinematics diagram.
To export the data, use the export options in the kinematics diagrams.
To display the measurement as a label attached to the object, right click the object and choose an option under the Display measure menu.
Each measurable object has specific options based on the quantities it can measure.
Note
The data displayed directly on objects uses raw coordinates, whereas the kinematics diagram uses the (optional) filtering mechanism.
The data can be exported to an image or to tabular data.
For tabular data the points are sorted by time; the first column is the time in milliseconds, the second and third columns are the X and Y positions.
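As an illustration of how such an export can be consumed downstream, here is a sketch in Python. The column layout (time in milliseconds, then X and Y) follows the description above; the inline data and header names are assumptions, as the actual export may differ:

```python
import csv
import io

# Mocked rows in the exported layout: time (ms), X, Y.
# In practice you would open the .csv file saved from the diagram.
data = """t,x,y
0,0.0,0.0
100,0.1,0.0
200,0.2,0.0
300,0.3,0.0
"""

rows = list(csv.reader(io.StringIO(data)))[1:]
t = [float(r[0]) / 1000 for r in rows]   # convert to seconds
x = [float(r[1]) for r in rows]
y = [float(r[2]) for r in rows]

# Speed from simple forward differences between consecutive samples.
speeds = [
    ((x[i + 1] - x[i]) ** 2 + (y[i + 1] - y[i]) ** 2) ** 0.5 / (t[i + 1] - t[i])
    for i in range(len(t) - 1)
]
```

Note that this naive finite-difference speed uses the raw exported coordinates; the speed shown inside Kinovea's diagrams is computed from filtered coordinates, as explained later in this section.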
The data source list contains a list of all tracked points.
Data from individual points can be hidden from the line chart by unchecking the checkbox in front of the point name.
Points in drawings with multiple points are listed by the name of their drawing and a point identifier.
The Data type drop down sets the kinematic quantity used in the line chart.
The following options are available:
Horizontal position
Vertical position
Total distance
Total horizontal displacement
Total vertical displacement
Speed
Horizontal velocity
Vertical velocity
Acceleration
Horizontal acceleration
Vertical acceleration
Total horizontal displacement and total vertical displacement compute the displacement between the first point and the current point of the trajectory.
This is similar to the horizontal and vertical position but makes it relative instead of absolute.
Note
Be mindful that acceleration values are very sensitive to noise as they are the second derivative of the digitized position.
The data can be exported to an image or to tabular data.
For tabular data the points are sorted by time; the first column is the time in milliseconds, the other columns are values from the time series.
The data source list contains a list of all tracked angles.
Data from individual angles can be hidden from the line chart by unchecking the checkbox in front of the angle name.
The data can be exported to an image or to tabular data.
For tabular data the points are sorted by time; the first column is the time in milliseconds, the other columns are values from the time series.
The angle-angle diagram is a qualitative plot showing the dynamics of a movement pattern over multiple trials.
This type of diagram correlates two angular measurements over time to highlight their interaction.
For example, this can be used to study the coordination of the knee and hip during flexion or extension.
The data can be exported to an image or to tabular data.
For tabular data, the output has only two columns, correlating the first angle value with the second value at the same point in time.
Due to the digitization process the raw coordinates are noisy and the resulting quantities, especially derivatives like speed and acceleration, are less accurate than they could be.
Carefully filtering the coordinates removes a lot of this noise and provides more accurate measurements.
Data shown in the kinematics diagrams is computed using filtered coordinates.
This filtering can be disabled under Preferences ‣ Drawings ‣ General ‣ Enable coordinates filtering.
The coordinates are passed through a low pass filter to remove noise.
The filter does two passes of a second-order Butterworth filter.
The two passes (one forward, one backward) are used to cancel the phase shift [1].
To initialize the filter, the trajectory is extrapolated for 10 data points on each side using reflected values around the end points.
The extrapolated points are then removed from the filtered results [2].
The filter is tested on the data at various cutoff frequencies between 0.5Hz and the Nyquist frequency.
The best cutoff frequency is computed by estimating the autocorrelation of residuals and finding the frequency yielding the residuals that are the least autocorrelated.
The filtered data set corresponding to this cutoff frequency is kept as the final result [3].
The autocorrelation of residuals is estimated using the Durbin-Watson statistic.
For trajectories, the cutoff frequency can be visualized in the About tab of the diagram dialog.
The calculated cutoff frequency depends on the data and is different for each trajectory object.
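The steps above can be sketched with NumPy and SciPy. This is only an illustration of the general approach (reflection padding, zero-phase second-order Butterworth filtering, cutoff selection by the Durbin-Watson statistic of the residuals), not Kinovea's exact implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth(coords, fs):
    """Zero-phase low-pass filtering with automatic cutoff selection."""
    coords = np.asarray(coords, dtype=float)
    # Extrapolate 10 points on each side by reflection around the end points.
    pad = 10
    left = 2 * coords[0] - coords[pad:0:-1]
    right = 2 * coords[-1] - coords[-2:-pad - 2:-1]
    padded = np.concatenate([left, coords, right])

    best = None
    for fc in np.linspace(0.5, 0.49 * fs, 50):
        b, a = butter(2, fc / (fs / 2))    # second-order Butterworth
        filtered = filtfilt(b, a, padded)  # one forward + one backward pass
        resid = padded - filtered
        # Durbin-Watson statistic: a value of 2 means uncorrelated residuals.
        dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
        if best is None or abs(dw - 2) < best[0]:
            best = (abs(dw - 2), filtered)
    return best[1][pad:-pad]               # drop the extrapolated points
```

The cutoff frequency retained by such a procedure depends on the data, which is why each trajectory object reports a different value.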
Challis J. (1999). A procedure for the automatic determination of filter cutoff frequency for the processing of biomechanical data., Journal of Applied Biomechanics, Volume 15, Issue 3.
The camera viewport is the main area where the camera image is visible.
The image itself can be moved around by dragging with the mouse and resized using the manipulators at the corners of the image or by using CTRL + mouse scroll.
Drawings on the capture screen can go outside the image area.
If the image stays black, there might be a problem with the available USB bandwidth or power, or the exposure duration might be too short.
If nothing is visible at all, not even the black image rectangle, the camera did not connect correctly; for example, the camera might be in use in another application at the same time.
The infobar contains information about the connected camera and streaming performance.
The first part of the infobar displays the alias of the camera and the current configuration for image size, frame rate and image format.
Clicking in this area of the infobar will bring up the camera configuration dialog.
The frame rate indicated is the one configured; the actual frame rate sent by the camera might be different for various reasons, such as low light levels or hardware limitations.
The second part of the infobar displays the following live statistics:
This is the frequency at which Kinovea is receiving images from the camera. The value is in frames per second.
Many cameras will reduce their frame rate based on various external factors, for example when a camera uses auto-exposure and the exposure duration computed by the device is incompatible with the selected frame rate.
This is the amount of data that passes through Kinovea as it processes the stream. The value is in megabytes per second, with the convention of one megabyte as 1024 kilobytes.
This value is related to the image size, frame rate and format, possible on-camera image compression, link bandwidth, and possible post-processing done at the driver level.
You can use this value to estimate the necessary speed for your storage medium to write the uncompressed stream.
In the case of a non-compressed stream in RGB24 format, the value is calculated as follows:
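The calculation multiplies the image dimensions by 3 bytes per pixel (RGB24) and by the frame rate. A small sketch, with illustrative numbers:

```python
# Data rate of an uncompressed RGB24 stream:
# width * height * 3 bytes per pixel * frames per second,
# divided by 1024 * 1024 to express it in megabytes per second.
def rgb24_data_rate_mbps(width, height, fps):
    return width * height * 3 * fps / (1024 * 1024)

# Hypothetical example: a 640x480 camera at 30 fps.
rate = rgb24_data_rate_mbps(640, 480, 30)  # about 26.4 MB/s
```

A storage medium would need to sustain at least this write speed to record such a stream uncompressed.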
This value describes how much Kinovea is struggling to keep up with the camera framerate.
It is computed as the time taken to process one frame divided by the interval between frames.
When this value is near 100%, it takes Kinovea the same amount of time to process one frame as the time budget it has for that frame; if it goes over 100%, dropped frames may occur.
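As a sketch of that computation (the numbers are illustrative):

```python
def load_percentage(processing_ms, fps):
    """Time taken to process one frame divided by the per-frame budget."""
    budget_ms = 1000.0 / fps
    return 100.0 * processing_ms / budget_ms

# e.g. 8 ms of processing at 100 fps fills 80% of the 10 ms budget.
load = load_percentage(8, 100)
```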
The toolbar contains drawing tools usable on the capture screen. Some tools available in the playback screen are not available in the capture screen.
Some buttons may give access to multiple tools. To access the other tools, right click the button or perform a long press on the button.
The style profile dialog is not currently accessible in the capture screen; to change the default style of a tool, open a playback screen and change it from there.
The capture controls area contains the following buttons:
Configure camera
Displays the camera configuration dialog to change options like image size or frame rate.
The available options depend on the specific camera brand and model.
Pause camera
Pauses or restarts the camera stream. This disconnects the camera.
When the camera is disconnected, it is possible to review the last few seconds of action seen by the camera by adjusting the delay.
Disarm capture trigger
Disarms or rearms the audio capture trigger. When the audio trigger is disarmed, audio levels will not be monitored and capture will not be automatically started.
The microphone and audio level threshold can be configured from Options ‣ Preferences ‣ Capture ‣ Automation.
Save image
Saves the image currently displayed to an image file based on the configured file name and saving directory.
The saving directory can be configured from Options ‣ Preferences ‣ Capture ‣ Image naming.
Start recording video
Starts or stops recording the video. The video is recorded based on the compression options, recording mode, and naming options found under Options ‣ Preferences ‣ Capture.
The delay controls let you adjust the amount of delay, in seconds, of the displayed camera stream with respect to the real-time action.
The maximum amount of delay depends on the camera configuration — hardware compression, image format, image size, frame rate — and the memory allocated in the delay cache under Options ‣ Preferences ‣ Capture ‣ Memory.
These fields define the names of the next files that will be saved when exporting an image or capturing a video.
They are automatically updated after each recording but can also be modified manually.
The file names can use macros like the current date or the name of the camera.
The list of available macros and configuration options can be found under Options ‣ Preferences ‣ Capture ‣ Image naming and Options ‣ Preferences ‣ Capture ‣ Video naming.
Clicking on the folder buttons will open the main preferences dialog on the relevant page.
Kinovea supports devices with a DirectShow driver.
These devices include USB video class (UVC) webcams, Mini DV camcorders, and analog video converters.
The available stream formats depend on the brand and model of the camera.
The typical stream formats include:
RGB, RGB24, RGB32: the images are not compressed.
YUV, YCbCr, YUY2, I420: the images are not compressed.
MJPEG: the images are compressed on the camera.
Using the MJPEG stream format can lower the bandwidth requirements and improve framerate.
Note
Kinovea's native storage format for compressed videos is MJPEG. When using this stream format, the videos are saved as-is, without any extra decompression or compression steps.
This value is related to the amount of time the sensor is exposed.
Changing the exposure duration lets you find a tradeoff between motion blur and light requirements.
Lowering the exposure duration reduces motion blur and increases the amount of light required to capture the scene.
For most camera brands, the unit of this value is not known and it is exposed as an arbitrary number.
For some camera brands the value is shown in milliseconds, when a special property is exposed by the driver or the values were derived manually.
This value is a limiting factor for the framerate. If this value is too high the framerate is lowered automatically by the camera.
This is the amount of amplification of the signal captured on the sensor.
Increasing this value increases the apparent brightness but can introduce noise in the image.
In order to add a network or IP camera in Kinovea use the Manual connection button on the camera tab of the explorer panel.
This dialog brings the configuration options described below. The same options are available later by clicking on the Configure camera button in the capture screen.
The values for the options depend on the particular brand of network camera and your own network configuration. The most important setting is the Final URL: this is the URL used by Kinovea to connect to the camera stream.
Clicking the Test button will try to connect to the camera and report success or failure.
Smartphones can be used as network cameras over the WiFi network by using a dedicated application, provided it supports streaming in MJPEG.
For example, "IP Webcam" by Pavel Khlebovich can be used on Android devices.
Consult the application to find the URL of the server and the IP address of the smartphone.
The machine vision cameras are supported via plugins that are distributed separately from Kinovea.
Each plugin must be installed under the application data folder, inside the Plugins\Camera sub-folder.
The runtime for the specific camera brand, provided by the manufacturer, must also be installed separately.
Consult the section for each brand below to check if any extra customization is needed during the installation of the vendor’s runtime to make it work with Kinovea.
This section describes the common options for the configuration of machine vision cameras.
Settings or installation information specific to each camera vendor are described after the section Resulting Framerate.
When the selected stream format is a raw Bayer format, this option defines which reconstruction method, if any, is applied to the raw sensor data. The following options are available:
Raw: No reconstruction is performed. Kinovea receives the images as-is.
Mono: Monochromatic images are rebuilt by the camera vendor runtime before passing the images to Kinovea.
Color: Color images are rebuilt by the camera vendor runtime before passing the images to Kinovea.
Note
When using a raw stream format and the video is recorded without compression, the raw sensor data is saved to the video file.
It is then possible to rebuild the color at playback time by choosing the appropriate option under the menu Image ‣ Demosaicing.
This approach can be interesting to limit the bandwidth required to transfer the camera stream and save it to storage.
The target framerate. Whether this framerate is actually reached or not depends on the image format, size, exposure and the camera hardware.
If the framerate cannot be sustained, the Resulting framerate value will be displayed in red.
If the Auto checkbox is checked, the camera will ignore the value and always send the maximum framerate possible based on the rest of the configuration and the camera hardware.
If the Auto checkbox is not checked, the camera will use at most the configured value, if it is possible for the hardware to do so.
The manual configuration can be interesting if you want to use a specific framerate that is less than the maximum possible.
Note
After changing the image size or stream format you must click on Reconnect for the maximum framerate information to be updated.
This is the amount of time the sensor is exposed, in microseconds.
Changing the exposure duration lets you find a tradeoff between motion blur and light requirements.
Lowering this value reduces motion blur but increases the amount of light required to capture the scene.
This value is a limiting factor for the framerate.
For example a value of 20 milliseconds implies that there cannot be more than 50 images per second captured.
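That relationship can be sketched as:

```python
def max_framerate(exposure_us):
    """Upper bound on the frame rate imposed by the exposure duration.

    The sensor cannot start a new frame before the previous
    exposure has finished, so fps <= 1 / exposure time.
    """
    return 1_000_000 / exposure_us

# A 20 ms (20000 us) exposure caps the camera at 50 images per second.
cap = max_framerate(20000)
```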
This is the amount of amplification of the signal captured on the sensor.
Increasing this value increases the apparent brightness but can introduce noise in the image.
When installing Basler's Pylon runtime software, it is necessary to use the Custom option in the installer, expand the pylon Runtime node, and select the pylon C .NET Runtime option.
If you have already installed the software you can re-run the installer and choose Modify the current installation to access this option.
In order to use options that are not supported in Kinovea, use IDS' uEye Cockpit.
Modify the camera configuration in uEye Cockpit and do File ‣ Save parameters to file.
Then in Kinovea, use the Import parameters button on the camera configuration dialog and point to the file you just saved.
In order to unlink the configuration file from Kinovea, right click the camera thumbnail in the main explorer view and use the menu Forget custom settings.
The camera simulator is a virtual camera in Kinovea.
This camera can be used to evaluate the expected performances of real hardware on a particular computer.
In order to add a camera simulator use the Manual connection button on the camera tab of the explorer panel.
In the manual connection dialog, use the Camera type drop down and select Camera simulator.
When using the JPEG format, a set of images is compressed in advance and the images are cycled to the output.
This avoids any computer slowdowns due to compression when using the simulator to evaluate whether a particular camera configuration would be sustainable, as the JPEG compression would be done in the camera.
When using the RGB24 format, the current time and an image sequence number are stamped on the image.
Tip
You can use the sequential numbers on the images in the recorded video to verify if any frames were dropped during the recording process.
The live delay function lets you delay the display of the live stream.
This function is very useful for self-coaching: set the delay to be approximately the total time of the exercise plus the time necessary to come back to the computer.
The camera and Kinovea can then be left unattended.
By the time you complete your exercise and come back to the computer, you should see a replay of the action.
The same approach can be used with a group of students or athletes, forming an uninterrupted queue of intertwined feedback loops.
To delay the display of the camera stream use the delay slider or the delay input box.
The delay amount is in seconds.
The maximum amount of delay depends on the camera configuration — hardware compression, image format, image size, frame rate — and the memory allocated in the delay cache under Options ‣ Preferences ‣ Capture ‣ Memory.
If you wish to use a delay that is larger than the maximum that can be set using the slider, increase the size of the memory buffer under Options ‣ Preferences ‣ Capture ‣ Memory.
Note however that this buffer is always allocated, even if you do not use the delay function.
Note
When using two capture screens at the same time the memory buffer is shared between the screens.
It is possible to record video while delay is active. The option selected for the recording mode impacts whether the delay is taken into account for the recording or not.
Recording mode    Delay taken into account
Camera            No
Delayed           Yes
Retroactive       Yes
Combining delay and recording can be used to record actions happening before the moment the record button is hit or triggered.
When recording with delay, the time origin of the resulting video is set to the real moment the record button was hit or triggered.
For example, suppose we are filming a golf swing with a total duration of 2.5 seconds and a delay of 1.5 seconds.
The recording is started via audio trigger when the club hits the ball.
The first image of the video will correspond to what the camera was filming 1.5 seconds before the club hit the ball.
The time origin in the metadata file will be set to the club-ball impact. All of the action happening before the impact will be timestamped with negative numbers.
The path to save the recording and the file name of the output video are based on the options under Options ‣ Preferences ‣ Capture ‣ Video naming.
The duration of the recording is either set manually when using the stop recording button, or automatically when using the Stop recording by duration option in Options ‣ Preferences ‣ Capture ‣ Automation.
Recordings can be compressed or uncompressed based on the option set in Options ‣ Preferences ‣ Capture ‣ General.
Recordings can ignore or take into account the live delay based on the recording mode set in Options ‣ Preferences ‣ Capture ‣ Recording.
For high speed cameras the framerate set in the metadata of the output video can be customized by configuring a replacement framerate in Options ‣ Preferences ‣ Capture ‣ Recording.
When the recording process is not fast enough to sustain the camera framerate, images are skipped (dropped) and not added to the output video.
This can corrupt time measurements made on the output video, as they assume a stable frame rate.
In order to make the most out of the camera without any frame drops, it is important to identify the bottlenecks and configure the camera and Kinovea according to your requirements and the trade-offs you are interested in.
Use the infobar to get feedback about performance.
The recording mode Retroactive should not yield any frame drops but the duration of the output videos is limited by the amount of memory allocated for the delay buffer.
Furthermore, this mode renders the camera unavailable for a small duration after the recording, while Kinovea performs the actual export.
The other two modes record on the fly and can record for arbitrarily long periods (based on storage space) without a post-recording pause. However, they require more configuration if the camera does not already produce compressed images.
For cameras that do not produce an already compressed stream, Kinovea may compress the images on the fly on the CPU. This process is usually slow compared to the frame rate that the camera can sustain.
You can disable compression on recordings under Options ‣ Preferences ‣ Capture ‣ General.
When compression is disabled, the amount of data to store is 5 to 10 times larger.
In this configuration, the speed of the storage medium might become the new bottleneck.
Solid state drives (SSD) are strongly recommended over hard disk drives (HDD); NVMe SSDs offer even better performance.
Ultimately it is also possible to configure a RAM Drive to further increase the storage speed.
It is possible to set up Kinovea to record and replay videos multiple times in a row without manual interaction.
To do this, set recordings to start from the audio trigger and stop from the recording duration preset.
Then add a replay folder observer monitoring the capture folder; this will automatically open and play the last recorded video.
Replay folder observers are special playback screens that constantly monitor a specific folder for new videos.
When a new video is created in the monitored folder, the playback screen automatically loads it and starts playing it.
This can be used to automate the capture and replay feedback loop.
The replay folder observer works at the file system level and is independent from the source or process creating the video.
The videos can be created by the same or another instance of Kinovea, or by an external process.
Apart from the monitoring mechanism, the video is loaded normally.
In particular the annotation file created by the capture screen is loaded and drawings created on the capture side are imported.
You can create a replay observer from the following places:
From the menu File ‣ Open replay folder observer…. This opens a folder selection dialog.
In the files and shortcuts tabs of the explorer panel, right click a folder or a file and choose Open as replay folder observer.
In the capture tab, right click a file in the capture history and choose Open as replay folder observer.
In the playback screen, right click the main video image and choose Open a replay folder observer…. This opens a folder selection dialog.
In the capture screen, in the recently captured files list, right click the thumbnail of a file and choose Open as replay folder observer.
The recently opened folder observers are listed under the menu File ‣ Recent.
Note
The menus on files open the observer on the parent folder of that file.
The observer will immediately load the most recent file of the folder, which is not necessarily the file used to start it.
When manually loading a video into a replay folder observer, the observer is not deactivated and will continue to monitor the folder.
In order to deactivate the monitoring and loading of new videos, you can turn the folder observer into a regular playback screen by clicking the observer icon in the infobar of the screen.
Conversely, in order to turn a normal playback screen into a replay folder observer, click the video icon in the infobar of the screen.
The annotations created on the capture screen are stored in a companion KVA file next to the recordings.
When the recorded video is opened in Kinovea the annotations are loaded and can be modified.
Because the capture screen is simultaneously monitoring the annotation file it last exported, any changes to the annotations on the playback side will be reflected into the capture screen.
The next recording from this capture screen will then use the updated annotations.
To export the current working zone as a new video use the Video export button,
the menu File ‣ Export video, or the context menu Export video.
This will open an export dialog with an option to take the slow motion into account.
If checked, the resulting video will use the current speed slider to determine the output framerate.
The output video format can be selected in the file name selection dialog: MKV (Matroska), MP4, or AVI.
The video codec is always MJPEG.
Note
Drawings added to the video are painted on the output images and will no longer be modifiable.
Comments added at the key image level and other annotations that are not directly visible are not saved in the output.
Tip
To minimize the loss of information, only export the video when absolutely necessary; otherwise keep the original video and save the annotations in a separate KVA file.
To export a series of images from the video, use the Export sequence button in the playback screen.
This brings up a dialog to configure the frequency at which the images are taken from the video.
If the checkbox Export the key images is checked, the frequency slider is ignored and only the key images are exported.
When using two playback screens, use the Export video or Export image buttons in the joint controls to create a single output containing both inputs side by side.
The input videos will be combined frame by frame using the configured synchronization point.
To export measurements such as point positions, linear velocities, or angular accelerations, use the kinematics dialogs.
The kinematics dialogs are found under the Tools menu: Scatter diagram, Linear kinematics, Angular kinematics, Angle-angle diagram.
To export data use the export options at the bottom right of the dialogs.
This will export the data in CSV format, either to the clipboard or to a file.
The first column is the time, either in milliseconds or normalized, and the other columns are the data sources.
The measurements displayed and exported from the kinematics dialog use the filtered coordinates.
The filtering process is described in the About box of the Linear kinematics dialog.
You can control filtering from the preferences at Options ‣ Preferences ‣ Drawings ‣ General ‣ Enable coordinates filtering.
Another option to export data is to use the converters menus under File ‣ Export to spreadsheet.
The following options are available:
LibreOffice (.odf)
Microsoft Excel (.xml)
Web (.html)
Gnuplot (.txt)
The underlying mechanism for these menus is to convert the annotation data into the output format: it does not perform any higher level computations.
This approach has the following differences with the export from the kinematics dialogs:
The coordinates do not use filtering.
Only the coordinates are exported, not any higher level measurements like speed or acceleration.
Each object is exported independently in its own table.
Key images times are exported.
Stopwatches are exported.
The time column uses the configured timecode format and may not be numerical in nature.
To use Gnuplot to plot the trajectory data on a 3D graph with time as the third dimension, you can use the following commands:
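A minimal sketch, assuming the trajectory was exported with time, X and Y in the first three comma-separated columns of a hypothetical file named trajectory.txt (both the file name and the column layout are assumptions):

```gnuplot
# Plot the trajectory in 3D, time on the vertical axis.
set datafile separator ","
set xlabel "x"
set ylabel "y"
set zlabel "time (ms)"
splot "trajectory.txt" using 2:3:1 with lines
```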
If you used Kinovea in your research we would very much appreciate it if you included it in your bibliography.
You can find examples of formatted citations in the About dialog under the Citation tab.
Note
Kinovea is an open source project and is not published by a company,
thus there is no meaningful "city" or "country" of origin, as is sometimes requested by journals for software references.
Kinovea does not store any personal data and does not transmit anything over the internet.
It is a standalone desktop application, similar to a media player with extra functions,
and does not require the Internet to function.
The only aspect of the program that makes use of the Internet is the menu "Check for updates…",
which consults a file on the Kinovea web server and reports if a new version is available or not.
This question comes up when a scientific journal asks for bibliographic references.
Kinovea is not a company and does not have headquarters in any city.
There is no meaningful way to answer this question.
Please ask the journal about their dedicated format for citing Open Source software.
You may also check the built-in citation examples in the About dialog.
It is not the place of Kinovea or its authors to suggest one camera brand over another.
It is best for the project to stay neutral.
Besides, there are so many parameters specific to each use case that it is not possible to answer this question with anything other than "it depends".
Please register on the forum and ask the question there, providing as much detail and context as possible; this way other users can share their experience directly.
7. Why is the speed of my video slowing down by itself?
The speed slider is forced down when the original frame rate of the video cannot be sustained by the player.
To check if a new version has been published from within the program itself you can use the menu Help ‣ Check for updates.
This function requires an Internet connection.
The function simply consults a text file on the web server that contains the version number of the latest published version.
There is no automatic update mechanism; to update, you must download and install the new version manually.
Name of this instance of Kinovea. Used in the window title and to select a preference file.
-hideExplorer
The explorer panel will not be visible. Default: false.
-workspace <path>
Path to a Kinovea workspace XML file. This overrides other video options.
To create a workspace file use the menu Options ‣ Workspace ‣ Export workspace.
-video <path>
Path to a video to load.
-speed <0..200>
Playback speed to play the video, as a percentage of its original framerate. Default: 100.
-stretch
The video will be expanded to fit the viewport. Default: false.
While Kinovea is running it is possible to send it commands from a third party application or script.
For example this can be used to trigger recording based on a software event.
The commands use the Windows messaging system and the WM_COPYDATA message.
The exact way to create these Windows messages depends on the programming language or platform used to write the third party application.
Each command has to be sent separately. There is no return value.
The general format of the messages is the following:
Kinovea:<Category>.<Command>
The list of categories and commands is available under Options ‣ Preferences ‣ Keyboard.
This mechanism works even if Kinovea is not in the foreground.
If there are multiple instances of Kinovea running you will need to send the message to the correct instance based on its name.
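A minimal Windows-only sketch in Python using ctypes. The window-title lookup, the UTF-16 payload encoding, and the category/command names used in the example are assumptions; the actual categories and commands are listed under Options ‣ Preferences ‣ Keyboard.

```python
import ctypes
from ctypes import wintypes

WM_COPYDATA = 0x004A

def make_command(category, command):
    """Build a command string in the format Kinovea expects."""
    return f"Kinovea:{category}.{command}"

class COPYDATASTRUCT(ctypes.Structure):
    _fields_ = [("dwData", ctypes.c_void_p),
                ("cbData", wintypes.DWORD),
                ("lpData", ctypes.c_char_p)]

def send_command(window_title, category, command):
    """Send a command to a running Kinovea instance (Windows only).

    Locating the window by its exact title is an assumption; the title
    of the target instance may include the version or instance name.
    """
    hwnd = ctypes.windll.user32.FindWindowW(None, window_title)
    if not hwnd:
        raise RuntimeError("Kinovea window not found")
    payload = make_command(category, command).encode("utf-16-le")
    cds = COPYDATASTRUCT(0, len(payload), payload)
    ctypes.windll.user32.SendMessageW(hwnd, WM_COPYDATA, 0, ctypes.byref(cds))

# Hypothetical usage, e.g. triggered by a software event:
# send_command("Kinovea", "CaptureScreen", "ToggleRecording")
```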