anki_vector
SDK for programming with the Anki Vector robot.
class anki_vector.Robot(serial=None, ip=None, name=None, config=None, default_logging=True, behavior_activation_timeout=10, cache_animation_lists=True, enable_face_detection=False, estimate_facial_expression=False, enable_audio_feed=False, enable_custom_object_detection=False, enable_nav_map_feed=None, show_viewer=False, show_3d_viewer=False, behavior_control_level=<ControlPriorityLevel.DEFAULT_PRIORITY: 20>)

    The Robot object is responsible for managing the state and connections to a Vector, and is typically the entry point to running the SDK.
    Most of the robot's functionality will not work until it is properly connected to Vector. There are two ways to get connected:
    1. Using with: it works just like opening a file, and the connection will close when the with block's indentation ends.

        import anki_vector

        # Create the robot connection
        with anki_vector.Robot() as robot:
            # Run your commands
            robot.anim.play_animation_trigger("GreetAfterLongTime")
    2. Using connect() and disconnect() to explicitly open and close the connection: this allows the robot's connection to continue in the context in which it started.

        import anki_vector

        # Create a Robot object
        robot = anki_vector.Robot()
        # Connect to the Robot
        robot.connect()
        # Run your commands
        robot.anim.play_animation_trigger("GreetAfterLongTime")
        # Disconnect from Vector
        robot.disconnect()
    Parameters:
        serial (Optional[str]) – Vector's serial number. The robot's serial number (ex. 00e20100) is located on the underside of Vector, or accessible from Vector's debug screen. Used to identify which Vector configuration to load.
        name (Optional[str]) – Vector's name (in the format "Vector-XXXX") to be used for mDNS discovery. If a Vector with the given name is discovered, the ip parameter (and config field) will be overridden.
        config (Optional[dict]) – A custom dict to override values in Vector's configuration. (optional) Example: {"cert": "/path/to/file.cert", "name": "Vector-XXXX", "guid": "<secret_key>"} where cert is the certificate to identify Vector, name is the name shown on Vector's face when his backpack is double-clicked on the charger, and guid is the authorization token that identifies the SDK user. Note: Never share your authentication credentials with anyone.
        default_logging (bool) – Toggle default logging.
        behavior_activation_timeout (int) – The time to wait for control of the robot before failing.
        cache_animation_lists (bool) – Get the list of animation triggers and animations available at startup.
        enable_face_detection (bool) – Turn face detection on/off.
        estimate_facial_expression (bool) – Turn facial expression estimation on/off. Enabling estimate_facial_expression returns a facial expression, the expression values and the anki_vector.util.ImageRect for observed face regions (eyes, nose, and mouth) as part of the RobotObservedFace event. It is turned off by default because the increased processing time reduces the number of RobotObservedFace events.
        enable_audio_feed (bool) – Turn the audio feed on/off.
        enable_custom_object_detection (bool) – Turn custom object detection on/off.
        enable_nav_map_feed (Optional[bool]) – Turn the navigation map feed on/off.
        show_viewer (bool) – Specifies whether to display a view of Vector's camera in a window.
        show_3d_viewer (bool) – Specifies whether to display a 3D view of Vector's understanding of the world in a window.
        behavior_control_level (ControlPriorityLevel) – Request control of Vector's behavior system at a specific level of control. Pass None if behavior control is not needed. See ControlPriorityLevel for more information.
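    For example, to observe Vector without interrupting his autonomous behavior, connect with behavior_control_level=None. A minimal sketch; the call shown is read-only and does not require behavior control:

        import anki_vector

        # Connect without reserving behavior control; Vector keeps
        # running his own behaviors while we read state.
        with anki_vector.Robot(behavior_control_level=None) as robot:
            version_state = robot.get_version_state()
            print(version_state.os_version)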
property accel

    The current accelerometer reading (x, y, z).

        import anki_vector

        with anki_vector.Robot() as robot:
            current_accel = robot.accel

    Return type: anki_vector.util.Vector3
property anim

    A reference to the AnimationComponent instance.

    Return type: AnimationComponent
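    A quick sketch of typical use (the animation name is illustrative; the available names can be listed via robot.anim.anim_list):

        import anki_vector

        with anki_vector.Robot() as robot:
            # Play one of the animations reported by robot.anim.anim_list.
            robot.anim.play_animation('anim_turn_left_01')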
property audio

    The audio instance used to control Vector's microphone feed and speaker playback.

    Return type: AudioComponent
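    A minimal playback sketch, assuming a 16-bit PCM WAV file is available locally (the file name is illustrative):

        import anki_vector

        with anki_vector.Robot() as robot:
            # Stream a WAV file to Vector's speaker at volume 75 (0-100).
            robot.audio.stream_wav_file("vector_alert.wav", 75)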
property behavior

    A reference to the BehaviorComponent instance.

    Return type: BehaviorComponent
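    A short sketch of driving and speaking through the behavior component:

        import anki_vector
        from anki_vector.util import degrees, distance_mm, speed_mmps

        with anki_vector.Robot() as robot:
            robot.behavior.say_text("Hello World")
            robot.behavior.drive_straight(distance_mm(100), speed_mmps(50))
            robot.behavior.turn_in_place(degrees(90))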
property camera

    The CameraComponent instance used to control Vector's camera feed.

        import anki_vector

        with anki_vector.Robot() as robot:
            robot.camera.init_camera_feed()
            image = robot.camera.latest_image
            image.raw_image.show()

    Return type: CameraComponent
property carrying_object_id

    The ID of the object currently being carried (-1 if none).

        import anki_vector
        from anki_vector.util import degrees

        # Set the robot so that he can see a cube.
        with anki_vector.Robot() as robot:
            robot.behavior.set_head_angle(degrees(0.0))
            robot.behavior.set_lift_height(0.0)

            robot.world.connect_cube()

            if robot.world.connected_light_cube:
                robot.behavior.pickup_object(robot.world.connected_light_cube)
                print("carrying_object_id: ", robot.carrying_object_id)

    Return type: int
property conn

    A reference to the Connection instance.

    Return type: Connection
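    A sketch of requesting and releasing behavior control explicitly on the connection (useful when connected with behavior_control_level=None):

        import anki_vector

        with anki_vector.Robot(behavior_control_level=None) as robot:
            # Take control only for the duration of the animation.
            robot.conn.request_control()
            robot.anim.play_animation_trigger("GreetAfterLongTime")
            robot.conn.release_control()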
connect(timeout=10)

    Start the connection to Vector.

        import anki_vector

        robot = anki_vector.Robot()
        robot.connect()
        robot.anim.play_animation_trigger("GreetAfterLongTime")
        robot.disconnect()

    Parameters:
        timeout (int) – The time to allow for a connection before a anki_vector.exceptions.VectorTimeoutException is raised.

    Return type: None
disconnect()

    Close the connection with Vector.

        import anki_vector

        robot = anki_vector.Robot()
        robot.connect()
        robot.anim.play_animation_trigger("GreetAfterLongTime")
        robot.disconnect()

    Return type: None
property enable_audio_feed

    The audio feed enabled/disabled.

    Getter: Returns whether the audio feed is enabled.
    Setter: Enable/disable the audio feed.

        import time

        import anki_vector

        with anki_vector.Robot(enable_audio_feed=True) as robot:
            time.sleep(5)
            robot.enable_audio_feed = False
            time.sleep(5)

    Return type: bool
property events

    A reference to the EventHandler instance.

    Return type: EventHandler
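    A minimal subscription sketch (the callback name is illustrative; for Robot, the callback receives the robot, the event type, and the event):

        import time

        import anki_vector
        from anki_vector.events import Events

        def on_robot_state(robot, event_type, event):
            # robot_state events stream continuously while connected.
            print("robot_state event received")

        with anki_vector.Robot() as robot:
            robot.events.subscribe(on_robot_state, Events.robot_state)
            time.sleep(1)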
property faces

    A reference to the FaceComponent instance.

    Return type: FaceComponent
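    A sketch of listing the names Vector has enrolled, assuming the FaceComponent's request_enrolled_names() call:

        import anki_vector

        with anki_vector.Robot() as robot:
            name_data_list = robot.faces.request_enrolled_names()
            print("Enrolled names: {0}".format(name_data_list))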
property force_async

    A flag used to determine if this is a Robot or AsyncRobot.

    Return type: bool
get_battery_state()

    Check the current state of the robot and cube batteries.

    The robot is considered fully charged above 4.1 volts. At 3.6V, the robot is approaching low charge.

    Robot battery level values are as follows:

        Value   Level     Description
        1       Low       3.6V or less. If on charger, 4V or less.
        2       Nominal   Normal operating levels.
        3       Full      This state can only be achieved when Vector is on the charger.

    Cube battery level values are shown below:

        Value   Level     Description
        1       Low       1.1V or less.
        2       Normal    Normal operating levels.

        import anki_vector

        with anki_vector.Robot() as robot:
            print("Connecting to a cube...")
            robot.world.connect_cube()

            battery_state = robot.get_battery_state()
            if battery_state:
                print("Robot battery voltage: {0}".format(battery_state.battery_volts))
                print("Robot battery Level: {0}".format(battery_state.battery_level))
                print("Robot battery is charging: {0}".format(battery_state.is_charging))
                print("Robot is on charger platform: {0}".format(battery_state.is_on_charger_platform))
                print("Robot suggested charger time: {0}".format(battery_state.suggested_charger_sec))
                print("Cube battery level: {0}".format(battery_state.cube_battery.level))
                print("Cube battery voltage: {0}".format(battery_state.cube_battery.battery_volts))
                print("Cube battery seconds since last reading: {0}".format(battery_state.cube_battery.time_since_last_reading_sec))
                print("Cube battery factory id: {0}".format(battery_state.cube_battery.factory_id))

    Return type: BatteryStateResponse
get_feature_flag(feature_name)

    Get the status of the given feature flag of the robot.

    This lets you check whether a specific feature is valid and enabled (sufficiently developed to be used).

        import anki_vector

        with anki_vector.Robot(behavior_control_level=None) as robot:
            response = robot.get_feature_flag(feature_name='Exploring')
            if response:
                print(response)

    Return type: FeatureFlagResponse
get_feature_flag_list()

    Get a list of the feature flags available on the robot.

        import anki_vector

        with anki_vector.Robot() as robot:
            response = robot.get_feature_flag_list()
            if response:
                for feature in response.list:
                    print(feature)

    Return type: FeatureFlagListResponse
get_latest_attention_transfer()

    Get the reason why the latest attention transfer failed, if any.

    Returns: an AttentionTransfer with the fields:
        reason (AttentionTransferReason)
        seconds_ago

        import anki_vector

        with anki_vector.Robot() as robot:
            att_trans = robot.get_latest_attention_transfer()
            if att_trans:
                print("Last attention transfer failed because of: {0}".format(att_trans.reason))

    Return type: LatestAttentionTransferResponse
get_version_state()

    Get the versioning information for Vector, including Vector's os_version and engine_build_id.

        import anki_vector

        with anki_vector.Robot() as robot:
            version_state = robot.get_version_state()
            if version_state:
                print("Robot os_version: {0}".format(version_state.os_version))
                print("Robot engine_build_id: {0}".format(version_state.engine_build_id))

    Return type: VersionStateResponse
property gyro

    The current gyroscope reading (x, y, z).

        import anki_vector

        with anki_vector.Robot() as robot:
            current_gyro = robot.gyro

    Return type: anki_vector.util.Vector3
property head_angle_rad

    Vector's head angle (up/down), in radians.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_head_angle_rad = robot.head_angle_rad

    Return type: float
property head_tracking_object_id

    The ID of the object the head is tracking (-1 if none).

        import anki_vector

        with anki_vector.Robot() as robot:
            current_head_tracking_object_id = robot.head_tracking_object_id

    Return type: int
property last_image_time_stamp

    The robot's timestamp for the last image seen.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_last_image_time_stamp = robot.last_image_time_stamp

    Return type: int
property left_wheel_speed_mmps

    Vector's left wheel speed in mm/sec.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_left_wheel_speed_mmps = robot.left_wheel_speed_mmps

    Return type: float
property lift_height_mm

    Height of Vector's lift from the ground, in millimeters.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_lift_height_mm = robot.lift_height_mm

    Return type: float
property localized_to_object_id

    The ID of the object that the robot is localized to (-1 if none).

        import anki_vector

        with anki_vector.Robot() as robot:
            current_localized_to_object_id = robot.localized_to_object_id

    Return type: int
property motors

    A reference to the MotorComponent instance.

    Return type: MotorComponent
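    A short sketch of driving the wheels directly:

        import time

        import anki_vector

        with anki_vector.Robot() as robot:
            # Drive the left wheel at 25 mm/s and the right at 50 mm/s,
            # arcing left for three seconds, then stop.
            robot.motors.set_wheel_motors(25, 50)
            time.sleep(3)
            robot.motors.set_wheel_motors(0, 0)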
property nav_map

    A reference to the NavMapComponent instance.

    Return type: NavMapComponent
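    A minimal sketch of reading the latest navigation map (requires the nav map feed to be enabled):

        import anki_vector

        with anki_vector.Robot(enable_nav_map_feed=True) as robot:
            latest_nav_map = robot.nav_map.latest_nav_map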
property photos

    A reference to the PhotographComponent instance.

    Return type: PhotographComponent
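    A sketch of fetching photos Vector has taken, assuming the PhotographComponent's load_photo_info(), photo_info, and get_photo() members:

        import anki_vector

        with anki_vector.Robot() as robot:
            # Download metadata for the photos stored on the robot,
            # then fetch each image.
            robot.photos.load_photo_info()
            for photo_info in robot.photos.photo_info:
                photo = robot.photos.get_photo(photo_info)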
property pose

    The current pose (position and orientation) of Vector.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_robot_pose = robot.pose

    Return type: anki_vector.util.Pose
property pose_angle_rad

    Vector's pose angle (heading in the X-Y plane), in radians.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_pose_angle_rad = robot.pose_angle_rad

    Return type: float
property pose_pitch_rad

    Vector's pose pitch (angle up/down), in radians.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_pose_pitch_rad = robot.pose_pitch_rad

    Return type: float
property proximity

    The ProximityComponent containing state related to object proximity detection.

        import anki_vector

        with anki_vector.Robot() as robot:
            proximity_data = robot.proximity.last_sensor_reading
            if proximity_data is not None:
                print(proximity_data.distance)

    Return type: ProximityComponent
property right_wheel_speed_mmps

    Vector's right wheel speed in mm/sec.

        import anki_vector

        with anki_vector.Robot() as robot:
            current_right_wheel_speed_mmps = robot.right_wheel_speed_mmps

    Return type: float
property screen

    A reference to the ScreenComponent instance.

    Return type: ScreenComponent
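    A short sketch of tinting Vector's face screen a solid color:

        import time

        import anki_vector

        with anki_vector.Robot() as robot:
            # Show solid orange on the face screen for one second.
            robot.screen.set_screen_to_color(anki_vector.color.Color(rgb=[255, 128, 0]), duration_sec=1.0)
            time.sleep(1)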
property status

    A property that exposes various status properties of the robot.

    This status provides a simple mechanism to, for example, detect if any of Vector's motors are moving, determine if Vector is being held, or if he is on the charger. The full list is available in the RobotStatus class documentation.

        import anki_vector

        with anki_vector.Robot() as robot:
            if robot.status.is_being_held:
                print("Vector is being held!")
            else:
                print("Vector is not being held.")

    Return type: RobotStatus
property touch

    The TouchComponent containing state related to touch detection.

        import anki_vector

        with anki_vector.Robot() as robot:
            print('Robot is being touched: {0}'.format(robot.touch.last_sensor_reading.is_being_touched))

    Return type: TouchComponent
property viewer

    The ViewerComponent instance used to render Vector's camera feed.

        import time

        import anki_vector

        with anki_vector.Robot() as robot:
            # Render video for 5 seconds
            robot.viewer.show()
            time.sleep(5)

            # Disable video render and camera feed for 5 seconds
            robot.viewer.close()

    Return type: ViewerComponent
property viewer_3d

    The Viewer3DComponent instance used to render Vector's navigation map.

        import time

        import anki_vector

        with anki_vector.Robot(show_3d_viewer=True, enable_nav_map_feed=True) as robot:
            # Render 3D view of navigation map for 5 seconds
            time.sleep(5)

    Return type: Viewer3DComponent
property vision

    The VisionComponent containing functionality related to vision-based object detection.

        import anki_vector

        with anki_vector.Robot() as robot:
            robot.vision.enable_custom_object_detection()

    Return type: VisionComponent
class anki_vector.AsyncRobot(serial=None, ip=None, name=None, config=None, default_logging=True, behavior_activation_timeout=10, cache_animation_lists=True, enable_face_detection=False, estimate_facial_expression=False, enable_audio_feed=False, enable_custom_object_detection=False, enable_nav_map_feed=None, show_viewer=False, show_3d_viewer=False, behavior_control_level=<ControlPriorityLevel.DEFAULT_PRIORITY: 20>)

    The AsyncRobot object is just like the Robot object, but allows multiple commands to be executed at the same time. To achieve this, all gRPC function calls also return a concurrent.futures.Future.
    1. Using with: it works just like opening a file, and the connection will close when the with block's indentation ends.

        import anki_vector
        from anki_vector.util import degrees

        # Create the robot connection
        with anki_vector.AsyncRobot() as robot:
            # Start saying text asynchronously
            say_future = robot.behavior.say_text("Now is the time")
            # Turn robot, wait for completion
            turn_future = robot.behavior.turn_in_place(degrees(3 * 360))
            turn_future.result()
            # Play greet animation trigger, wait for completion
            greet_future = robot.anim.play_animation_trigger("GreetAfterLongTime")
            greet_future.result()
            # Make sure text has been spoken
            say_future.result()
    2. Using connect() and disconnect() to explicitly open and close the connection: this allows the robot's connection to continue in the context in which it started.

        import anki_vector
        from anki_vector.util import degrees

        # Create a Robot object
        robot = anki_vector.AsyncRobot()
        # Connect to Vector
        robot.connect()
        # Start saying text asynchronously
        say_future = robot.behavior.say_text("Now is the time")
        # Turn robot, wait for completion
        turn_future = robot.behavior.turn_in_place(degrees(3 * 360))
        turn_future.result()
        # Play greet animation trigger, wait for completion
        greet_future = robot.anim.play_animation_trigger("GreetAfterLongTime")
        greet_future.result()
        # Make sure text has been spoken
        say_future.result()
        # Disconnect from Vector
        robot.disconnect()
    When getting callbacks from the event stream, it's important to understand that function calls return a concurrent.futures.Future and not an asyncio.Future. This means any async callback functions will need to use asyncio.wrap_future() to be able to await the function's response.

        import asyncio
        import time

        import anki_vector

        async def callback(robot, event_type, event):
            await asyncio.wrap_future(robot.anim.play_animation_trigger('GreetAfterLongTime'))
            await asyncio.wrap_future(robot.behavior.set_head_angle(anki_vector.util.degrees(40)))

        if __name__ == "__main__":
            args = anki_vector.util.parse_command_args()
            with anki_vector.AsyncRobot(serial=args.serial, enable_face_detection=True) as robot:
                robot.behavior.set_head_angle(anki_vector.util.degrees(40))
                robot.events.subscribe(callback, anki_vector.events.Events.robot_observed_face)
                # Waits 10 seconds. Show Vector your face.
                time.sleep(10)
    Parameters:
        serial (Optional[str]) – Vector's serial number. The robot's serial number (ex. 00e20100) is located on the underside of Vector, or accessible from Vector's debug screen. Used to identify which Vector configuration to load.
        config (Optional[dict]) – A custom dict to override values in Vector's configuration. (optional) Example: {"cert": "/path/to/file.cert", "name": "Vector-XXXX", "guid": "<secret_key>"} where cert is the certificate to identify Vector, name is the name shown on Vector's face when his backpack is double-clicked on the charger, and guid is the authorization token that identifies the SDK user. Note: Never share your authentication credentials with anyone.
        default_logging (bool) – Toggle default logging.
        behavior_activation_timeout (int) – The time to wait for control of the robot before failing.
        cache_animation_lists (bool) – Get the list of animation triggers and animations available at startup.
        enable_face_detection (bool) – Turn face detection on/off.
        estimate_facial_expression (bool) – Turn facial expression estimation on/off.
        enable_audio_feed (bool) – Turn the audio feed on/off.
        enable_custom_object_detection (bool) – Turn custom object detection on/off.
        enable_nav_map_feed (Optional[bool]) – Turn the navigation map feed on/off.
        show_viewer (bool) – Specifies whether to display a view of Vector's camera in a window.
        show_3d_viewer (bool) – Specifies whether to display a 3D view of Vector's understanding of the world in a window.
        behavior_control_level (ControlPriorityLevel) – Request control of Vector's behavior system at a specific level of control. Pass None if behavior control is not needed. See ControlPriorityLevel for more information.