The InteractiveObject class is the abstract base class for all display objects with which the user can interact, using the mouse, keyboard, or other user input device.

You cannot instantiate the InteractiveObject class directly. A call to the `new InteractiveObject()` constructor throws an ArgumentError exception.

The InteractiveObject class itself does not include any APIs for rendering content onscreen. To create a custom subclass of the InteractiveObject class, extend one of the subclasses that do have APIs for rendering content onscreen, such as the Sprite, SimpleButton, TextField, or MovieClip classes.

@event clear Dispatched when the user selects 'Clear' (or

						  'Delete') from the text context menu. This
						  event is dispatched to the object that
						  currently has focus. If the object that
						  currently has focus is a TextField, the
						  default behavior of this event is to cause
						  any currently selected text in the text field
						  to be deleted.

@event click Dispatched when a user presses and releases

						  the main button of the user's pointing device
						  over the same InteractiveObject. For a click
						  event to occur, it must always follow this
						  series of events in the order of occurrence:
						  mouseDown event, then mouseUp. The target
						  object must be identical for both of these
						  events; otherwise the `click`
						  event does not occur. Any number of other
						  mouse events can occur at any time between
						  the `mouseDown` and
						  `mouseUp` events; the
						  `click` event still occurs.
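A minimal listener sketch, assuming a Sprite subclass and the standard `openfl.events.MouseEvent` API (the class and handler names are illustrative):

```haxe
import openfl.display.Sprite;
import openfl.events.MouseEvent;

class ClickExample extends Sprite
{
	public function new()
	{
		super();
		graphics.beginFill(0x24AFC4);
		graphics.drawRect(0, 0, 100, 100);
		// click fires only if the mouseDown and mouseUp events
		// share this object as their target
		addEventListener(MouseEvent.CLICK, onClick);
	}

	private function onClick(event:MouseEvent):Void
	{
		trace("clicked at " + event.localX + ", " + event.localY);
	}
}
```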

@event contextMenu Dispatched when a user gesture triggers the

						  context menu associated with this interactive
						  object in an AIR application.

@event copy Dispatched when the user activates the

						  platform-specific accelerator key combination
						  for a copy operation or selects 'Copy' from
						  the text context menu. This event is
						  dispatched to the object that currently has
						  focus. If the object that currently has focus
						  is a TextField, the default behavior of this
						  event is to cause any currently selected text
						  in the text field to be copied to the
						  clipboard.

@event cut Dispatched when the user activates the

						  platform-specific accelerator key combination
						  for a cut operation or selects 'Cut' from the
						  text context menu. This event is dispatched
						  to the object that currently has focus. If
						  the object that currently has focus is a
						  TextField, the default behavior of this event
						  is to cause any currently selected text in
						  the text field to be cut to the clipboard.

@event doubleClick Dispatched when a user presses and releases

						  the main button of a pointing device twice in
						  rapid succession over the same
						  InteractiveObject when that object's
						  `doubleClickEnabled` flag is set
						  to `true`. For a
						  `doubleClick` event to occur, it
						  must immediately follow the following series
						  of events: `mouseDown`,
						  `mouseUp`, `click`,
						  `mouseDown`, `mouseUp`.
						  All of these events must share the same
						  target as the `doubleClick` event.
						  The second click, represented by the second
						  `mouseDown` and
						  `mouseUp` events, must occur
						  within a specific period of time after the
						  `click` event. The allowable
						  length of this period varies by operating
						  system and can often be configured by the
						  user. If the target is a selectable text
						  field, the word under the pointer is selected
						  as the default behavior. If the target
						  InteractiveObject does not have its
						  `doubleClickEnabled` flag set to
						  `true` it receives two
						  `click` events.

						  The `doubleClickEnabled`
						  property defaults to `false`.

						  The double-click text selection behavior
						  of a TextField object is not related to the
						  `doubleClick` event. Use
						  `TextField.doubleClickEnabled` to
						  control TextField selections.
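A sketch of enabling and handling the event, assuming a Sprite subclass (names are illustrative):

```haxe
import openfl.display.Sprite;
import openfl.events.MouseEvent;

class DoubleClickExample extends Sprite
{
	public function new()
	{
		super();
		graphics.beginFill(0xFFCC00);
		graphics.drawCircle(50, 50, 40);
		// without this flag the object receives two click events
		// instead of a doubleClick event
		doubleClickEnabled = true;
		addEventListener(MouseEvent.DOUBLE_CLICK, onDoubleClick);
	}

	private function onDoubleClick(event:MouseEvent):Void
	{
		trace("double-clicked");
	}
}
```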

@event focusIn Dispatched after a display object

						  gains focus. This situation happens when a
						  user highlights the object with a pointing
						  device or keyboard navigation. The recipient
						  of such focus is called the target object of
						  this event, while the corresponding
						  InteractiveObject instance that lost focus
						  because of this change is called the related
						  object. A reference to the related object is
						  stored in the receiving object's
						  `relatedObject` property. The
						  `shiftKey` property is not used.
						  This event follows the dispatch of the
						  previous object's `focusOut`
						  event.

@event focusOut Dispatched after a display object

						  loses focus. This happens when a user
						  highlights a different object with a pointing
						  device or keyboard navigation. The object
						  that loses focus is called the target object
						  of this event, while the corresponding
						  InteractiveObject instance that receives
						  focus is called the related object. A
						  reference to the related object is stored in
						  the target object's
						  `relatedObject` property. The
						  `shiftKey` property is not used.
						  This event precedes the dispatch of the
						  `focusIn` event by the related
						  object.

@event gesturePan Dispatched when the user moves a point of

						  contact over the InteractiveObject instance
						  on a touch-enabled device (such as moving a
						  finger from left to right over a display
						  object on a mobile phone or tablet with a
						  touch screen). Some devices might also
						  interpret this contact as a
						  `mouseOver` event and as a
						  `touchOver` event.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, the
						  InteractiveObject instance can dispatch a
						  `mouseOver` event or a
						  `touchOver` event or a
						  `gesturePan` event, or all if the
						  current environment supports it. Choose how
						  you want to handle the user interaction. Use
						  the openfl.ui.Multitouch class to manage touch
						  event handling (enable touch gesture event
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseOver` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `gesturePan` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.
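A sketch of opting in to gesture events, assuming the `Multitouch`, `MultitouchInputMode`, and `TransformGestureEvent` APIs behave as in the Flash equivalents:

```haxe
import openfl.display.Sprite;
import openfl.events.TransformGestureEvent;
import openfl.ui.Multitouch;
import openfl.ui.MultitouchInputMode;

class PanExample extends Sprite
{
	public function new()
	{
		super();
		// request gesture events rather than raw touch points; in
		// environments without touch support only mouse events arrive
		Multitouch.inputMode = MultitouchInputMode.GESTURE;
		addEventListener(TransformGestureEvent.GESTURE_PAN, onPan);
	}

	private function onPan(event:TransformGestureEvent):Void
	{
		// offsetX/offsetY report the pan distance for this update
		x += event.offsetX;
		y += event.offsetY;
	}
}
```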

@event gesturePressAndTap Dispatched when the user creates a point of

						  contact with an InteractiveObject instance,
						  then taps on a touch-enabled device (such as
						  placing several fingers over a display object
						  to open a menu and then tapping one finger to
						  select a menu item on a mobile phone or
						  tablet with a touch screen). Some devices
						  might also interpret this contact as a
						  combination of several mouse events, as well.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, and then provides
						  a secondary tap, the InteractiveObject
						  instance can dispatch a
						  `mouseOver` event and a
						  `click` event (among others) as
						  well as the `gesturePressAndTap`
						  event, or all if the current environment
						  supports it. Choose how you want to handle
						  the user interaction. Use the
						  openfl.ui.Multitouch class to manage touch
						  event handling(enable touch gesture event
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseOver` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `gesturePressAndTap` event, you
						  can design your event handler to respond to
						  the specific needs of a touch-enabled
						  environment and provide users with a richer
						  touch-enabled experience. You can also handle
						  both events, separately, to provide a
						  different response for a touch event than a
						  mouse event.

						  When handling the properties of the event
						  object, note that the `localX` and
						  `localY` properties are set to the
						  primary point of contact (the "push"). The
						  `offsetX` and `offsetY`
						  properties are the distance to the secondary
						  point of contact (the "tap").

@event gestureRotate Dispatched when the user performs a rotation

						  gesture at a point of contact with an
						  InteractiveObject instance (such as touching
						  two fingers and rotating them over a display
						  object on a mobile phone or tablet with a
						  touch screen). Two-finger rotation is a
						  common rotation gesture, but each device and
						  operating system can have its own
						  requirements to indicate rotation. Some
						  devices might also interpret this contact as
						  a combination of several mouse events, as
						  well.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, the
						  InteractiveObject instance can dispatch a
						  `mouseOver` event and a
						  `click` event (among others), in
						  addition to the `gestureRotate`
						  event, or all if the current environment
						  supports it. Choose how you want to handle
						  the user interaction. Use the
						  openfl.ui.Multitouch class to manage touch
						  event handling (enable touch gesture event
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseOver` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `gestureRotate` event, you can
						  design your event handler to respond to the
						  specific needs of a touch-enabled environment
						  and provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  When handling the properties of the event
						  object, note that the `localX` and
						  `localY` properties are set to the
						  primary point of contact. The
						  `offsetX` and `offsetY`
						  properties are the distance to the point of
						  contact where the rotation gesture is
						  complete.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event gestureSwipe Dispatched when the user performs a swipe

						  gesture at a point of contact with an
						  InteractiveObject instance (such as touching
						  three fingers to a screen and then moving
						  them in parallel over a display object on a
						  mobile phone or tablet with a touch screen).
						  Moving several fingers in parallel is a
						  common swipe gesture, but each device and
						  operating system can have its own
						  requirements for a swipe. Some devices might
						  also interpret this contact as a combination
						  of several mouse events, as well.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, and then moves the
						  fingers together, the InteractiveObject
						  instance can dispatch a `rollOver`
						  event and a `rollOut` event (among
						  others), in addition to the
						  `gestureSwipe` event, or all if
						  the current environment supports it. Choose
						  how you want to handle the user interaction.
						  If you choose to handle the
						  `rollOver` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `gestureSwipe` event, you can
						  design your event handler to respond to the
						  specific needs of a touch-enabled environment
						  and provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  When handling the properties of the event
						  object, note that the `localX` and
						  `localY` properties are set to the
						  primary point of contact. The
						  `offsetX` and `offsetY`
						  properties are the distance to the point of
						  contact where the swipe gesture is
						  complete.

						  **Note:** While some devices using the
						  Mac OS operating system can interpret a
						  four-finger swipe, this API only supports a
						  three-finger swipe.

@event gestureTwoFingerTap Dispatched when the user presses two points

						  of contact over the same InteractiveObject
						  instance on a touch-enabled device (such as
						  pressing and releasing two fingers over a
						  display object on a mobile phone or tablet
						  with a touch screen). Some devices might also
						  interpret this contact as a
						  `doubleClick` event.

						  Specifically, if a user taps two fingers
						  over an InteractiveObject, the
						  InteractiveObject instance can dispatch a
						  `doubleClick` event or a
						  `gestureTwoFingerTap` event, or
						  both if the current environment supports it.
						  Choose how you want to handle the user
						  interaction. Use the openfl.ui.Multitouch
						  class to manage touch event handling (enable
						  touch gesture event handling, simple touch
						  point event handling, or disable touch events
						  so only mouse events are dispatched). If you
						  choose to handle the `doubleClick`
						  event, then the same event handler will run
						  on a touch-enabled device and a mouse-enabled
						  device. However, if you choose to handle the
						  `gestureTwoFingerTap` event, you
						  can design your event handler to respond to
						  the specific needs of a touch-enabled
						  environment and provide users with a richer
						  touch-enabled experience. You can also handle
						  both events, separately, to provide a
						  different response for a touch event than a
						  mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event gestureZoom Dispatched when the user performs a zoom

						  gesture at a point of contact with an
						  InteractiveObject instance (such as touching
						  two fingers to a screen and then quickly
						  spreading the fingers apart over a display
						  object on a mobile phone or tablet with a
						  touch screen). Moving fingers apart is a
						  common zoom gesture, but each device and
						  operating system can have its own
						  requirements to indicate zoom. Some devices
						  might also interpret this contact as a
						  combination of several mouse events, as well.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, and then moves the
						  fingers apart, the InteractiveObject instance
						  can dispatch a `mouseOver` event
						  and a `click` event (among
						  others), in addition to the
						  `gestureZoom` event, or all if the
						  current environment supports it. Choose how
						  you want to handle the user interaction. Use
						  the openfl.ui.Multitouch class to manage touch
						  event handling (enable touch gesture event
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseOver` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `gestureZoom` event, you can
						  design your event handler to respond to the
						  specific needs of a touch-enabled environment
						  and provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  When handling the properties of the event
						  object, note that the `localX` and
						  `localY` properties are set to the
						  primary point of contact. The
						  `offsetX` and `offsetY`
						  properties are the distance to the point of
						  contact where the zoom gesture is
						  complete.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event imeStartComposition This event is dispatched to any client app

						  that supports inline input with an IME.

@event keyDown Dispatched when the user presses a key.

						  Mappings between keys and specific characters
						  vary by device and operating system. This
						  event type is generated after such a mapping
						  occurs but before the processing of an input
						  method editor (IME). IMEs are used to enter
						  characters, such as Chinese ideographs, that
						  the standard QWERTY keyboard is ill-equipped
						  to produce. This event occurs before the
						  `keyUp` event.

						  In AIR, canceling this event prevents the
						  character from being entered into a text
						  field.
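A sketch of a keyboard handler. Keyboard events reach the object that has focus, so this example listens on the stage, which receives them regardless of focus (the class and handler names are illustrative):

```haxe
import openfl.display.Sprite;
import openfl.events.Event;
import openfl.events.KeyboardEvent;
import openfl.ui.Keyboard;

class KeyExample extends Sprite
{
	public function new()
	{
		super();
		addEventListener(Event.ADDED_TO_STAGE, onAdded);
	}

	private function onAdded(event:Event):Void
	{
		// the stage is only available once the object is on the display list
		stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown);
	}

	private function onKeyDown(event:KeyboardEvent):Void
	{
		// keyCode identifies the key; charCode is the mapped character
		if (event.keyCode == Keyboard.LEFT)
		{
			x -= 10;
		}
	}
}
```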

@event keyFocusChange Dispatched when the user attempts to change

						  focus by using keyboard navigation. The
						  default behavior of this event is to change
						  the focus and dispatch the corresponding
						  `focusIn` and
						  `focusOut` events.

						  This event is dispatched to the object
						  that currently has focus. The related object
						  for this event is the InteractiveObject
						  instance that receives focus if you do not
						  prevent the default behavior. You can prevent
						  the change in focus by calling the
						  `preventDefault()` method in an
						  event listener that is properly registered
						  with the target object. Focus changes and
						  `focusIn` and
						  `focusOut` events are dispatched
						  by default.
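A sketch of canceling the focus change, assuming a focusable TextField already on the display list (`field` and `lockFocus` are hypothetical names):

```haxe
import openfl.events.FocusEvent;
import openfl.text.TextField;

// keeps Tab navigation from moving focus away from `field`
function lockFocus(field:TextField):Void
{
	field.addEventListener(FocusEvent.KEY_FOCUS_CHANGE, function(event:FocusEvent):Void
	{
		// event.relatedObject is the object that would receive focus;
		// canceling keeps focus in place, and no focusIn/focusOut
		// events are dispatched
		event.preventDefault();
	});
}
```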

@event keyUp Dispatched when the user releases a key.

						  Mappings between keys and specific characters
						  vary by device and operating system. This
						  event type is generated after such a mapping
						  occurs but before the processing of an input
						  method editor (IME). IMEs are used to enter
						  characters, such as Chinese ideographs, that
						  the standard QWERTY keyboard is ill-equipped
						  to produce. This event occurs after a
						  `keyDown` event.

@event middleClick Dispatched when a user presses and releases

						  the middle button of the user's pointing
						  device over the same InteractiveObject. For a
						  `middleClick` event to occur, it
						  must always follow this series of events in
						  the order of occurrence:
						  `middleMouseDown` event, then
						  `middleMouseUp`. The target object
						  must be identical for both of these events;
						  otherwise the `middleClick` event
						  does not occur. Any number of other mouse
						  events can occur at any time between the
						  `middleMouseDown` and
						  `middleMouseUp` events; the
						  `middleClick` event still occurs.

@event middleMouseDown Dispatched when a user presses the middle

						  pointing device button over an
						  InteractiveObject instance.

@event middleMouseUp Dispatched when a user releases the middle

						  pointing device button over an
						  InteractiveObject instance.

@event mouseDown Dispatched when a user presses the pointing

						  device button over an InteractiveObject
						  instance. If the target is a SimpleButton
						  instance, the SimpleButton instance displays
						  the `downState` display object as
						  the default behavior. If the target is a
						  selectable text field, the text field begins
						  selection as the default behavior.

@event mouseFocusChange Dispatched when the user attempts to change

						  focus by using a pointer device. The default
						  behavior of this event is to change the focus
						  and dispatch the corresponding
						  `focusIn` and
						  `focusOut` events.

						  This event is dispatched to the object
						  that currently has focus. The related object
						  for this event is the InteractiveObject
						  instance that receives focus if you do not
						  prevent the default behavior. You can prevent
						  the change in focus by calling
						  `preventDefault()` in an event
						  listener that is properly registered with the
						  target object. The `shiftKey`
						  property is not used. Focus changes and
						  `focusIn` and
						  `focusOut` events are dispatched
						  by default.

@event mouseMove Dispatched when a user moves the pointing

						  device while it is over an InteractiveObject.
						  If the target is a text field that the user
						  is selecting, the selection is updated as the
						  default behavior.

@event mouseOut Dispatched when the user moves a pointing

						  device away from an InteractiveObject
						  instance. The event target is the object
						  previously under the pointing device. The
						  `relatedObject` is the object the
						  pointing device has moved to. If the target
						  is a SimpleButton instance, the button
						  displays the `upState` display
						  object as the default behavior.

						  The `mouseOut` event is
						  dispatched each time the mouse leaves the
						  area of any child object of the display
						  object container, even if the mouse remains
						  over another child object of the display
						  object container. This is different behavior
						  than the purpose of the `rollOut`
						  event, which is to simplify the coding of
						  rollover behaviors for display object
						  containers with children. When the mouse
						  leaves the area of a display object or the
						  area of any of its children to go to an
						  object that is not one of its children, the
						  display object dispatches the
						  `rollOut` event. The
						  `rollOut` events are dispatched
						  consecutively up the parent chain of the
						  object, starting with the object and ending
						  with the highest parent that is neither the
						  root nor an ancestor of the
						  `relatedObject`.

@event mouseOver Dispatched when the user moves a pointing

						  device over an InteractiveObject instance.
						  The `relatedObject` is the object
						  that was previously under the pointing
						  device. If the target is a SimpleButton
						  instance, the object displays the
						  `overState` or
						  `upState` display object,
						  depending on whether the mouse button is
						  down, as the default behavior.

						  The `mouseOver` event is
						  dispatched each time the mouse enters the
						  area of any child object of the display
						  object container, even if the mouse was
						  already over another child object of the
						  display object container. This is different
						  behavior than the purpose of the
						  `rollOver` event, which is to
						  simplify the coding of rollover behaviors for
						  display object containers with children. When
						  the mouse enters the area of a display object
						  or the area of any of its children from an
						  object that is not one of its children, the
						  display object dispatches the
						  `rollOver` event. The
						  `rollOver` events are dispatched
						  consecutively down the parent chain of the
						  object, starting with the highest parent that
						  is neither the root nor an ancestor of the
						  `relatedObject` and ending with
						  the object.

@event mouseUp Dispatched when a user releases the pointing

						  device button over an InteractiveObject
						  instance. If the target is a SimpleButton
						  instance, the object displays the
						  `upState` display object. If the
						  target is a selectable text field, the text
						  field ends selection as the default behavior.
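The mouseDown/mouseMove/mouseUp sequence is the basis of a manual drag. A sketch, assuming a Sprite subclass (names and offsets are illustrative); the move and up listeners are attached to the stage so the drag continues even when the pointer briefly outruns the object:

```haxe
import openfl.display.Sprite;
import openfl.events.MouseEvent;

class DragExample extends Sprite
{
	public function new()
	{
		super();
		graphics.beginFill(0x336699);
		graphics.drawRect(0, 0, 80, 80);
		addEventListener(MouseEvent.MOUSE_DOWN, onDown);
	}

	private function onDown(event:MouseEvent):Void
	{
		stage.addEventListener(MouseEvent.MOUSE_MOVE, onMove);
		stage.addEventListener(MouseEvent.MOUSE_UP, onUp);
	}

	private function onMove(event:MouseEvent):Void
	{
		// center the square on the pointer
		x = event.stageX - 40;
		y = event.stageY - 40;
	}

	private function onUp(event:MouseEvent):Void
	{
		stage.removeEventListener(MouseEvent.MOUSE_MOVE, onMove);
		stage.removeEventListener(MouseEvent.MOUSE_UP, onUp);
	}
}
```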

@event mouseWheel Dispatched when a mouse wheel is spun over an

						  InteractiveObject instance. If the target is
						  a text field, the text scrolls as the default
						  behavior. Only available on Microsoft Windows
						  operating systems.

@event nativeDragComplete Dispatched by the drag initiator

						  InteractiveObject when the user releases the
						  drag gesture.

						  The event's dropAction property indicates
						  the action set by the drag target object; a
						  value of "none"
						 (`DragActions.NONE`) indicates
						  that the drop was canceled or was not
						  accepted.

						  The `nativeDragComplete` event
						  handler is a convenient place to update the
						  state of the initiating display object, for
						  example, by removing an item from a list (on
						  a drag action of "move"), or by changing the
						  visual properties.

@event nativeDragDrop Dispatched by the target InteractiveObject

						  when a dragged object is dropped on it and
						  the drop has been accepted with a call to
						  DragManager.acceptDragDrop().

						  Access the dropped data using the event
						  object `clipboard` property.

						  The handler for this event should set the
						  `DragManager.dropAction` property
						  to provide feedback to the initiator object
						  about which drag action was taken. If no
						  value is set, the DragManager will select a
						  default value from the list of allowed
						  actions.

@event nativeDragEnter Dispatched by an InteractiveObject when a

						  drag gesture enters its boundary.

						  Handle either the
						  `nativeDragEnter` or
						  `nativeDragOver` events to allow
						  the display object to become the drop
						  target.

						  To determine whether the dispatching
						  display object can accept the drop, check the
						  suitability of the data in
						  `clipboard` property of the event
						  object, and the allowed drag actions in the
						  `allowedActions` property.

@event nativeDragExit Dispatched by an InteractiveObject when a

						  drag gesture leaves its boundary.

@event nativeDragOver Dispatched by an InteractiveObject

						  continually while a drag gesture remains
						  within its boundary.

						  `nativeDragOver` events are
						  dispatched whenever the mouse is moved. On
						  Windows and Mac, they are also dispatched on
						  a short timer interval even when the mouse
						  has not moved.

						  Handle either the
						  `nativeDragOver` or
						  `nativeDragEnter` events to allow
						  the display object to become the drop
						  target.

						  To determine whether the dispatching
						  display object can accept the drop, check the
						  suitability of the data in
						  `clipboard` property of the event
						  object, and the allowed drag actions in the
						  `allowedActions` property.

@event nativeDragStart Dispatched at the beginning of a drag

						  operation by the InteractiveObject that is
						  specified as the drag initiator in the
						  DragManager.doDrag() call.

@event nativeDragUpdate Dispatched during a drag operation by the

						  InteractiveObject that is specified as the
						  drag initiator in the DragManager.doDrag()
						  call.

						  `nativeDragUpdate` events are
						  not dispatched on Linux.

@event paste Dispatched when the user activates the

						  platform-specific accelerator key combination
						  for a paste operation or selects 'Paste' from
						  the text context menu. This event is
						  dispatched to the object that currently has
						  focus. If the object that currently has focus
						  is a TextField, the default behavior of this
						  event is to cause the contents of the
						  clipboard to be pasted into the text field at
						  the current insertion point replacing any
						  currently selected text in the text field.

@event rightClick Dispatched when a user presses and releases

						  the right button of the user's pointing
						  device over the same InteractiveObject. For a
						  `rightClick` event to occur, it
						  must always follow this series of events in
						  the order of occurrence:
						  `rightMouseDown` event, then
						  `rightMouseUp`. The target object
						  must be identical for both of these events;
						  otherwise the `rightClick` event
						  does not occur. Any number of other mouse
						  events can occur at any time between the
						  `rightMouseDown` and
						  `rightMouseUp` events; the
						  `rightClick` event still occurs.

@event rightMouseDown Dispatched when a user presses the right

						  pointing device button over an
						  InteractiveObject instance.

@event rightMouseUp Dispatched when a user releases the right

						  pointing device button over an
						  InteractiveObject instance.

@event rollOut Dispatched when the user moves a pointing

						  device away from an InteractiveObject
						  instance. The event target is the object
						  previously under the pointing device or a
						  parent of that object. The
						  `relatedObject` is the object that
						  the pointing device has moved to. The
						  `rollOut` events are dispatched
						  consecutively up the parent chain of the
						  object, starting with the object and ending
						  with the highest parent that is neither the
						  root nor an ancestor of the
						  `relatedObject`.

						  The purpose of the `rollOut`
						  event is to simplify the coding of rollover
						  behaviors for display object containers with
						  children. When the mouse leaves the area of a
						  display object or the area of any of its
						  children to go to an object that is not one
						  of its children, the display object
						  dispatches the `rollOut` event.
						  This is different behavior than that of the
						  `mouseOut` event, which is
						  dispatched each time the mouse leaves the
						  area of any child object of the display
						  object container, even if the mouse remains
						  over another child object of the display
						  object container.
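A sketch of the rollOut/mouseOut distinction, assuming a Sprite container with several children (`container` and `watchPointer` are hypothetical names):

```haxe
import openfl.display.Sprite;
import openfl.events.MouseEvent;

function watchPointer(container:Sprite):Void
{
	// rollOut fires once, when the pointer leaves the container
	// and all of its children together
	container.addEventListener(MouseEvent.ROLL_OUT, function(e:MouseEvent):Void
	{
		trace("pointer left the container entirely");
	});

	// mouseOut fires every time the pointer leaves any individual
	// child, even while it remains inside the container
	container.addEventListener(MouseEvent.MOUSE_OUT, function(e:MouseEvent):Void
	{
		trace("pointer left " + e.target + " for " + e.relatedObject);
	});
}
```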

@event rollOver Dispatched when the user moves a pointing

						  device over an InteractiveObject instance.
						  The event target is the object under the
						  pointing device or a parent of that object.
						  The `relatedObject` is the object
						  that was previously under the pointing
						  device. The `rollOver` events are
						  dispatched consecutively down the parent
						  chain of the object, starting with the
						  highest parent that is neither the root nor
						  an ancestor of the `relatedObject`
						  and ending with the object.

						  The purpose of the `rollOver`
						  event is to simplify the coding of rollover
						  behaviors for display object containers with
						  children. When the mouse enters the area of a
						  display object or the area of any of its
						  children from an object that is not one of
						  its children, the display object dispatches
						  the `rollOver` event. This is
						  different behavior than that of the
						  `mouseOver` event, which is
						  dispatched each time the mouse enters the
						  area of any child object of the display
						  object container, even if the mouse was
						  already over another child object of the
						  display object container.

@event selectAll Dispatched when the user activates the

						  platform-specific accelerator key combination
						  for a select all operation or selects 'Select
						  All' from the text context menu. This event
						  is dispatched to the object that currently
						  has focus. If the object that currently has
						  focus is a TextField, the default behavior of
						  this event is to cause all the contents of
						  the text field to be selected.

@event softKeyboardActivate Dispatched immediately after the soft

						  keyboard is raised.

@event softKeyboardActivating Dispatched immediately before the soft

						  keyboard is raised.

@event softKeyboardDeactivate Dispatched immediately after the soft

						  keyboard is lowered.

@event tabChildrenChange Dispatched when the value of the object's

						  `tabChildren` flag changes.

@event tabEnabledChange Dispatched when the object's

						  `tabEnabled` flag changes.

@event tabIndexChange Dispatched when the value of the object's

						  `tabIndex` property changes.

@event textInput Dispatched when a user enters one or more

						  characters of text. Various text input
						  methods can generate this event, including
						  standard keyboards, input method editors
						 (IMEs), voice or speech recognition systems,
						  and even the act of pasting plain text with
						  no formatting or style information.

@event touchBegin Dispatched when the user first contacts a

						  touch-enabled device (such as touches a
						  finger to a mobile phone or tablet with a
						  touch screen). Some devices might also
						  interpret this contact as a
						  `mouseDown` event.

						  Specifically, if a user touches a finger
						  to a touch screen, the InteractiveObject
						  instance can dispatch a
						  `mouseDown` event or a
						  `touchBegin` event, or both if the
						  current environment supports it. Choose how
						  you want to handle the user interaction. Use
						  the openfl.ui.Multitouch class to manage touch
						  event handling (enable touch gesture event
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseDown` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `touchBegin` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.
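
						  As a sketch of that choice (`mySprite` is a placeholder for any InteractiveObject already on the display list):

```haxe
import openfl.events.MouseEvent;
import openfl.events.TouchEvent;
import openfl.ui.Multitouch;
import openfl.ui.MultitouchInputMode;

// Opt in to simple touch point events rather than gesture events
Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

// Touch-specific handler: can track individual touch points
mySprite.addEventListener(TouchEvent.TOUCH_BEGIN, function(event:TouchEvent):Void {
	trace("touch point " + event.touchPointID + " began");
});

// Mouse handler: also runs for the primary touch point on most devices
mySprite.addEventListener(MouseEvent.MOUSE_DOWN, function(event:MouseEvent):Void {
	trace("mouseDown at (" + event.stageX + ", " + event.stageY + ")");
});
```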

@event touchEnd Dispatched when the user removes contact with

						  a touch-enabled device (such as lifts a
						  finger off a mobile phone or tablet with a
						  touch screen). Some devices might also
						  interpret this contact as a
						  `mouseUp` event.

						  Specifically, if a user lifts a finger
						  from a touch screen, the InteractiveObject
						  instance can dispatch a `mouseUp`
						  event or a `touchEnd` event, or
						  both if the current environment supports it.
						  Choose how you want to handle the user
						  interaction. Use the openfl.ui.Multitouch
						  class to manage touch event handling (enable
						  touch gesture event handling, simple touch
						  point event handling, or disable touch events
						  so only mouse events are dispatched). If you
						  choose to handle the `mouseUp`
						  event, then the same event handler will run
						  on a touch-enabled device and a mouse-enabled
						  device. However, if you choose to handle the
						  `touchEnd` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event touchMove Dispatched when the user moves the point of

						  contact with a touch-enabled device (such as
						  drags a finger across a mobile phone or
						  tablet with a touch screen). Some devices
						  might also interpret this contact as a
						  `mouseMove` event.

						  Specifically, if a user moves a finger
						  across a touch screen, the InteractiveObject
						  instance can dispatch a
						  `mouseMove` event or a
						  `touchMove` event, or both if the
						  current environment supports it. Choose how
						  you want to handle the user interaction. Use
						  the openfl.ui.Multitouch class to manage touch
						  event handling (enable touch gesture event
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseMove` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `touchMove` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event touchOut Dispatched when the user moves the point of

						  contact away from an InteractiveObject
						  instance on a touch-enabled device (such as drags a
						  finger from one display object to another on
						  a mobile phone or tablet with a touch
						  screen). Some devices might also interpret
						  this contact as a `mouseOut`
						  event.

						  Specifically, if a user moves a finger
						  across a touch screen, the InteractiveObject
						  instance can dispatch a `mouseOut`
						  event or a `touchOut` event, or
						  both if the current environment supports it.
						  Choose how you want to handle the user
						  interaction. Use the openfl.ui.Multitouch
						  class to manage touch event handling (enable
						  touch gesture event handling, simple touch
						  point event handling, or disable touch events
						  so only mouse events are dispatched). If you
						  choose to handle the `mouseOut`
						  event, then the same event handler will run
						  on a touch-enabled device and a mouse-enabled
						  device. However, if you choose to handle the
						  `touchOut` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event touchOver Dispatched when the user moves the point of

						  contact over an InteractiveObject instance on
						  a touch-enabled device (such as drags a
						  finger from a point outside a display object
						  to a point over a display object on a mobile
						  phone or tablet with a touch screen). Some
						  devices might also interpret this contact as
						  a `mouseOver` event.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, the
						  InteractiveObject instance can dispatch a
						  `mouseOver` event or a
						  `touchOver` event, or both if the
						  current environment supports it. Choose how
						  you want to handle the user interaction. Use
						  the openfl.ui.Multitouch class to manage touch
						  event handling (enable touch gesture
						  handling, simple touch point event handling,
						  or disable touch events so only mouse events
						  are dispatched). If you choose to handle the
						  `mouseOver` event, then the same
						  event handler will run on a touch-enabled
						  device and a mouse-enabled device. However,
						  if you choose to handle the
						  `touchOver` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event touchRollOut Dispatched when the user moves the point of

						  contact away from an InteractiveObject
						  instance on a touch-enabled device (such as
						  drags a finger from over a display object to
						  a point outside the display object on a
						  mobile phone or tablet with a touch screen).
						  Some devices might also interpret this
						  contact as a `rollOut` event.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, the
						  InteractiveObject instance can dispatch a
						  `rollOut` event or a
						  `touchRollOut` event, or both if
						  the current environment supports it. Choose
						  how you want to handle the user interaction.
						  Use the openfl.ui.Multitouch class to manage
						  touch event handling (enable touch gesture
						  event handling, simple touch point event
						  handling, or disable touch events so only
						  mouse events are dispatched). If you choose
						  to handle the `rollOut` event,
						  then the same event handler will run on a
						  touch-enabled device and a mouse-enabled
						  device. However, if you choose to handle the
						  `touchRollOut` event, you can
						  design your event handler to respond to the
						  specific needs of a touch-enabled environment
						  and provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event touchRollOver Dispatched when the user moves the point of

						  contact over an InteractiveObject instance on
						  a touch-enabled device (such as drags a
						  finger from a point outside a display object
						  to a point over a display object on a mobile
						  phone or tablet with a touch screen). Some
						  devices might also interpret this contact as
						  a `rollOver` event.

						  Specifically, if a user moves a finger
						  over an InteractiveObject, the
						  InteractiveObject instance can dispatch a
						  `rollOver` event or a
						  `touchRollOver` event, or both if
						  the current environment supports it. Choose
						  how you want to handle the user interaction.
						  Use the openfl.ui.Multitouch class to manage
						  touch event handling (enable touch gesture
						  event handling, simple touch point event
						  handling, or disable touch events so only
						  mouse events are dispatched). If you choose
						  to handle the `rollOver` event,
						  then the same event handler will run on a
						  touch-enabled device and a mouse-enabled
						  device. However, if you choose to handle the
						  `touchRollOver` event, you can
						  design your event handler to respond to the
						  specific needs of a touch-enabled environment
						  and provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

@event touchTap Dispatched when the user lifts the point of

						  contact over the same InteractiveObject
						  instance on which the contact was initiated
						  on a touch-enabled device (such as presses
						  and releases a finger from a single point
						  over a display object on a mobile phone or
						  tablet with a touch screen). Some devices
						  might also interpret this contact as a
						  `click` event.

						  Specifically, if a user taps a finger over
						  an InteractiveObject, the InteractiveObject
						  instance can dispatch a `click`
						  event or a `touchTap` event, or
						  both if the current environment supports it.
						  Choose how you want to handle the user
						  interaction. Use the openfl.ui.Multitouch
						  class to manage touch event handling (enable
						  touch gesture event handling, simple touch
						  point event handling, or disable touch events
						  so only mouse events are dispatched). If you
						  choose to handle the `click`
						  event, then the same event handler will run
						  on a touch-enabled device and a mouse-enabled
						  device. However, if you choose to handle the
						  `touchTap` event, you can design
						  your event handler to respond to the specific
						  needs of a touch-enabled environment and
						  provide users with a richer touch-enabled
						  experience. You can also handle both events,
						  separately, to provide a different response
						  for a touch event than a mouse event.

						  **Note:** See the Multitouch class for
						  environment compatibility information.

Constructor

new ()

Calling the new InteractiveObject() constructor throws an ArgumentError exception. You can, however, call constructors for subclasses of InteractiveObject that render content onscreen, such as Sprite, SimpleButton, TextField, and MovieClip.

Variables

doubleClickEnabled:Bool

Specifies whether the object receives doubleClick events. The default value is false, which means that by default an InteractiveObject instance does not receive doubleClick events. If the doubleClickEnabled property is set to true, the instance receives doubleClick events within its bounds. The mouseEnabled property of the InteractiveObject instance must also be set to true for the object to receive doubleClick events.

No event is dispatched by setting this property. You must use the addEventListener() method to add an event listener for the doubleClick event.
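
For example (a sketch; `mySprite` stands for any InteractiveObject instance on the display list):

```haxe
import openfl.events.MouseEvent;

// Both properties must be true before doubleClick events are received
mySprite.mouseEnabled = true;       // true by default
mySprite.doubleClickEnabled = true; // false by default

mySprite.addEventListener(MouseEvent.DOUBLE_CLICK, function(event:MouseEvent):Void {
	trace("double-clicked at (" + event.localX + ", " + event.localY + ")");
});
```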

focusRect:Null<Bool>

Specifies whether this object displays a focus rectangle. It can take one of three values: true, false, or null. Values of true and false work as expected, specifying whether or not the focus rectangle appears. A value of null indicates that this object obeys the stageFocusRect property of the Stage.

mouseEnabled:Bool

Specifies whether this object receives mouse, or other user input, messages. The default value is true, which means that by default any InteractiveObject instance that is on the display list receives mouse events or other user input events. If mouseEnabled is set to false, the instance does not receive any mouse events (or other user input events like keyboard events). Any children of this instance on the display list are not affected. To change the mouseEnabled behavior for all children of an object on the display list, use openfl.display.DisplayObjectContainer.mouseChildren.

No event is dispatched by setting this property. You must use the addEventListener() method to create interactive functionality.
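
For example, a decorative overlay can be made transparent to user input (a sketch; `overlay` is assumed to be a Sprite layered above interactive content):

```haxe
// The overlay itself stops receiving mouse and other input events;
// objects beneath it receive them instead
overlay.mouseEnabled = false;

// Children of the overlay are unaffected by mouseEnabled; to disable
// them as well, use the container-level mouseChildren property
overlay.mouseChildren = false;
```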

needsSoftKeyboard:Bool

Specifies whether a virtual keyboard (an on-screen, software keyboard) should display when this InteractiveObject instance receives focus.

By default, the value is false and focusing an InteractiveObject instance does not raise a soft keyboard. If the needsSoftKeyboard property is set to true, the runtime raises a soft keyboard when the InteractiveObject instance is ready to accept user input. An InteractiveObject instance is ready to accept user input after a programmatic call to set the Stage focus property or a user interaction, such as a "tap." If the client system has a hardware keyboard available or does not support virtual keyboards, then the soft keyboard is not raised.

The InteractiveObject instance dispatches softKeyboardActivating, softKeyboardActivate, and softKeyboardDeactivate events when the soft keyboard raises and lowers.

Note: This property is not supported in AIR applications on iOS.
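
A minimal sketch (assuming `myInput` is a focusable InteractiveObject and the runtime supports soft keyboards):

```haxe
import openfl.events.SoftKeyboardEvent;

myInput.needsSoftKeyboard = true;

myInput.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_ACTIVATE, function(event:SoftKeyboardEvent):Void {
	trace("soft keyboard raised");
});

// Programmatically focusing the object makes it ready for input,
// which raises the soft keyboard on supporting devices
stage.focus = myInput;
```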

softKeyboardInputAreaOfInterest:Rectangle

Defines the area that should remain on-screen when a soft keyboard is displayed. If the needsSoftKeyboard property of this InteractiveObject is true, then the runtime adjusts the display as needed to keep the object in view while the user types. Ordinarily, the runtime uses the object bounds obtained from the DisplayObject.getBounds() method. You can specify a different area using this softKeyboardInputAreaOfInterest property.

Specify the softKeyboardInputAreaOfInterest in stage coordinates.

Note: On Android, the softKeyboardInputAreaOfInterest is not respected in landscape orientations.
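
For example (a sketch; the coordinates are arbitrary and expressed in stage coordinates):

```haxe
import openfl.geom.Rectangle;

myInput.needsSoftKeyboard = true;

// Keep this 300 x 60 region visible while the soft keyboard is up
myInput.softKeyboardInputAreaOfInterest = new Rectangle(20, 400, 300, 60);
```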

tabEnabled:Bool

Specifies whether this object is in the tab order. If this object is in the tab order, the value is true; otherwise, the value is false. By default, the value is false, except for the following:

  • For a SimpleButton object, the value is true.
  • For a TextField object with type = "input", the value is true.
  • For a Sprite object or MovieClip object with buttonMode = true, the value is true.

tabIndex:Int

Specifies the tab ordering of objects in a SWF file. The tabIndex property is -1 by default, meaning no tab index is set for the object.

If any currently displayed object in the SWF file contains a tabIndex property, automatic tab ordering is disabled, and the tab ordering is calculated from the tabIndex properties of objects in the SWF file. The custom tab ordering includes only objects that have tabIndex properties.

The tabIndex property can be a non-negative integer. The objects are ordered according to their tabIndex properties, in ascending order. An object with a tabIndex value of 1 precedes an object with a tabIndex value of 2. Do not use the same tabIndex value for multiple objects.

The custom tab ordering that the tabIndex property defines is flat. This means that no attention is paid to the hierarchical relationships of objects in the SWF file. All objects in the SWF file with tabIndex properties are placed in the tab order, and the tab order is determined by the order of the tabIndex values.
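
For example, the following sketch gives two hypothetical buttons a custom, flat tab order in which `button2` is reached before `button1`:

```haxe
// Setting any tabIndex disables automatic tab ordering; only objects
// with a tabIndex then participate in tabbing
button1.tabEnabled = true;
button1.tabIndex = 2;

button2.tabEnabled = true;
button2.tabIndex = 1;
```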

Note: To set the tab order for TLFTextField instances, cast the display object child of the TLFTextField as an InteractiveObject, then set the tabIndex property. For example:

cast(tlfInstance.getChildAt(1), InteractiveObject).tabIndex = 3;

To reverse the tab order from the default setting for three instances of a TLFTextField object (tlfInstance1, tlfInstance2 and tlfInstance3), use:

cast(tlfInstance1.getChildAt(1), InteractiveObject).tabIndex = 3;
cast(tlfInstance2.getChildAt(1), InteractiveObject).tabIndex = 2;
cast(tlfInstance3.getChildAt(1), InteractiveObject).tabIndex = 1;

Methods

requestSoftKeyboard ():Bool

Raises a virtual keyboard.

Calling this method focuses the InteractiveObject instance and raises the soft keyboard, if necessary. The needsSoftKeyboard property must also be true. A keyboard is not raised if a hardware keyboard is available, or if the client system does not support virtual keyboards.

Note: This method is not supported in AIR applications on iOS.

Returns:

A value of true means that the soft keyboard request was granted; false means that the soft keyboard was not raised.
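
For example (a sketch; `myInput` is a placeholder for a focusable InteractiveObject):

```haxe
myInput.needsSoftKeyboard = true;

var raised:Bool = myInput.requestSoftKeyboard();
if (!raised) {
	// A hardware keyboard may be present, or the system may not
	// support virtual keyboards
	trace("soft keyboard was not raised");
}
```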

Inherited Variables

Defined by DisplayObject

alpha:Float

Indicates the alpha transparency value of the object specified. Valid values are 0 (fully transparent) to 1 (fully opaque). The default value is 1. Display objects with alpha set to 0 are active, even though they are invisible.

blendMode:BlendMode

A value from the BlendMode class that specifies which blend mode to use. A bitmap can be drawn internally in two ways. If you have a blend mode enabled or an external clipping mask, the bitmap is drawn by adding a bitmap-filled square shape to the vector render. If you attempt to set this property to an invalid value, Flash runtimes set the value to BlendMode.NORMAL.

The blendMode property affects each pixel of the display object. Each pixel is composed of three constituent colors (red, green, and blue), and each constituent color has a value between 0x00 and 0xFF. Flash Player or Adobe AIR compares each constituent color of one pixel in the movie clip with the corresponding color of the pixel in the background. For example, if blendMode is set to BlendMode.LIGHTEN, Flash Player or Adobe AIR compares the red value of the display object with the red value of the background, and uses the lighter of the two as the value for the red component of the displayed color.

The following descriptions explain the blendMode settings. The BlendMode class defines the string values you can use.

  • BlendMode.NORMAL: The display object appears in front of the background. Pixel values of the display object override those of the background. Where the display object is transparent, the background is visible.
  • BlendMode.LAYER: Forces the creation of a transparency group for the display object. This means that the display object is pre-composed in a temporary buffer before it is processed further. This is done automatically if the display object is pre-cached using bitmap caching or if the display object is a display object container with at least one child object with a blendMode setting other than BlendMode.NORMAL. Not supported under GPU rendering.
  • BlendMode.MULTIPLY: Multiplies the values of the display object constituent colors by the constituent colors of the background, and then normalizes by dividing by 0xFF, resulting in darker colors. This setting is commonly used for shadows and depth effects. For example, if a constituent color (such as red) of one pixel in the display object and the corresponding color of the pixel in the background both have the value 0x88, the multiplied result is 0x4840. Dividing by 0xFF yields a value of 0x48 for that constituent color, which is a darker shade than the color of the display object or the color of the background.
  • BlendMode.SCREEN: Multiplies the complement (inverse) of the display object color by the complement of the background color, resulting in a bleaching effect. This setting is commonly used for highlights or to remove black areas of the display object.
  • BlendMode.LIGHTEN: Selects the lighter of the constituent colors of the display object and the colors of the background (the colors with the larger values). This setting is commonly used for superimposing type. For example, if the display object has a pixel with an RGB value of 0xFFCC33, and the background pixel has an RGB value of 0xDDF800, the resulting RGB value for the displayed pixel is 0xFFF833 (because 0xFF > 0xDD, 0xCC < 0xF8, and 0x33 > 0x00). Not supported under GPU rendering.
  • BlendMode.DARKEN: Selects the darker of the constituent colors of the display object and the colors of the background (the colors with the smaller values). This setting is commonly used for superimposing type. For example, if the display object has a pixel with an RGB value of 0xFFCC33, and the background pixel has an RGB value of 0xDDF800, the resulting RGB value for the displayed pixel is 0xDDCC00 (because 0xDD < 0xFF, 0xCC < 0xF8, and 0x00 < 0x33). Not supported under GPU rendering.
  • BlendMode.DIFFERENCE: Compares the constituent colors of the display object with the colors of its background, and subtracts the darker of the values of the two constituent colors from the lighter value. This setting is commonly used for more vibrant colors. For example, if the display object has a pixel with an RGB value of 0xFFCC33, and the background pixel has an RGB value of 0xDDF800, the resulting RGB value for the displayed pixel is 0x222C33 (because 0xFF - 0xDD = 0x22, 0xF8 - 0xCC = 0x2C, and 0x33 - 0x00 = 0x33).
  • BlendMode.ADD: Adds the values of the constituent colors of the display object to the colors of its background, applying a ceiling of 0xFF. This setting is commonly used for animating a lightening dissolve between two objects. For example, if the display object has a pixel with an RGB value of 0xAAA633, and the background pixel has an RGB value of 0xDD2200, the resulting RGB value for the displayed pixel is 0xFFC833 (because 0xAA + 0xDD > 0xFF, 0xA6 + 0x22 = 0xC8, and 0x33 + 0x00 = 0x33).
  • BlendMode.SUBTRACT: Subtracts the values of the constituent colors in the display object from the values of the background color, applying a floor of 0. This setting is commonly used for animating a darkening dissolve between two objects. For example, if the display object has a pixel with an RGB value of 0xAA2233, and the background pixel has an RGB value of 0xDDA600, the resulting RGB value for the displayed pixel is 0x338400 (because 0xDD - 0xAA = 0x33, 0xA6 - 0x22 = 0x84, and 0x00 - 0x33 < 0x00).
  • BlendMode.INVERT: Inverts the background.
  • BlendMode.ALPHA: Applies the alpha value of each pixel of the display object to the background. This requires the blendMode property of the parent display object to be set to BlendMode.LAYER. Not supported under GPU rendering.
  • BlendMode.ERASE: Erases the background based on the alpha value of the display object. This requires the blendMode property of the parent display object to be set to BlendMode.LAYER. Not supported under GPU rendering.
  • BlendMode.OVERLAY: Adjusts the color of each pixel based on the darkness of the background. If the background is lighter than 50% gray, the display object and background colors are screened, which results in a lighter color. If the background is darker than 50% gray, the colors are multiplied, which results in a darker color. This setting is commonly used for shading effects. Not supported under GPU rendering.
  • BlendMode.HARDLIGHT: Adjusts the color of each pixel based on the darkness of the display object. If the display object is lighter than 50% gray, the display object and background colors are screened, which results in a lighter color. If the display object is darker than 50% gray, the colors are multiplied, which results in a darker color. This setting is commonly used for shading effects. Not supported under GPU rendering.
  • BlendMode.SHADER: Adjusts the color using a custom shader routine. The shader that is used is specified as the Shader instance assigned to the blendShader property. Setting the blendShader property of a display object to a Shader instance automatically sets the display object's blendMode property to BlendMode.SHADER. If the blendMode property is set to BlendMode.SHADER without first setting the blendShader property, the blendMode property is set to BlendMode.NORMAL. Not supported under GPU rendering.
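
Setting a blend mode is a single assignment. For example (a sketch; `circle` and `square` are placeholder display objects):

```haxe
import openfl.display.BlendMode;

circle.blendMode = BlendMode.MULTIPLY; // darkens the circle against its background
square.blendMode = BlendMode.ADD;      // brightens toward white where colors overlap
```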

cacheAsBitmap:Bool

All vector data for a display object that has a cached bitmap is drawn to the bitmap instead of the main display. If cacheAsBitmapMatrix is null or unsupported, the bitmap is then copied to the main display as unstretched, unrotated pixels snapped to the nearest pixel boundaries. Pixels are mapped 1 to 1 with the parent object. If the bounds of the bitmap change, the bitmap is recreated instead of being stretched.

If cacheAsBitmapMatrix is non-null and supported, the object is drawn to the off-screen bitmap using that matrix and the stretched and/or rotated results of that rendering are used to draw the object to the main display.

No internal bitmap is created unless the cacheAsBitmap property is set to true.

After you set the cacheAsBitmap property to true, the rendering does not change; however, the display object performs pixel snapping automatically. The animation speed can be significantly faster depending on the complexity of the vector content.

The cacheAsBitmap property is automatically set to true whenever you apply a filter to a display object (when its filter array is not empty), and if a display object has a filter applied to it, cacheAsBitmap is reported as true for that display object, even if you set the property to false. If you clear all filters for a display object, the cacheAsBitmap setting changes to what it was last set to.

A display object does not use a bitmap even if the cacheAsBitmap property is set to true and instead renders from vector data in the following cases:

  • The bitmap is too large. In AIR 1.5 and Flash Player 10, the maximum size for a bitmap image is 8,191 pixels in width or height, and the total number of pixels cannot exceed 16,777,215 pixels. (So, if a bitmap image is 8,191 pixels wide, it can only be 2,048 pixels high.) In Flash Player 9 and earlier, the limit is 2,880 pixels in height and 2,880 pixels in width.
  • The bitmap fails to allocate (out of memory error).

The cacheAsBitmap property is best used with movie clips that have mostly static content and that do not scale and rotate frequently. With such movie clips, cacheAsBitmap can lead to performance increases when the movie clip is translated (when its x and y position is changed).
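
For example (a sketch; `background` is a placeholder for a mostly static movie clip):

```haxe
// Render the vector content once to an internal bitmap
background.cacheAsBitmap = true;

// Translation reuses the cached bitmap, so this is cheap
background.x += 4;

// Scaling or rotating would force the bitmap to be regenerated, which
// can make caching slower than plain vector rendering unless
// cacheAsBitmapMatrix is also set (where supported)
```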

cacheAsBitmapMatrix:Matrix

If non-null, this Matrix object defines how a display object is rendered when cacheAsBitmap is set to true. The application uses this matrix as a transformation matrix that is applied when rendering the bitmap version of the display object.

AIR profile support: This feature is supported on mobile devices, but it is not supported on desktop operating systems. It also has limited support on AIR for TV devices. Specifically, on AIR for TV devices, supported transformations include scaling and translation, but not rotation and skewing. See AIR Profile Support for more information regarding API support across multiple profiles.

With cacheAsBitmapMatrix set, the application retains a cached bitmap image across various 2D transformations, including translation, rotation, and scaling. If the application uses hardware acceleration, the object will be stored in video memory as a texture. This allows the GPU to apply the supported transformations to the object. The GPU can perform these transformations faster than the CPU.

To use hardware acceleration, set Rendering to GPU in the General tab of the iPhone Settings dialog box in Flash Professional CS5, or set the renderMode property to gpu in the application descriptor file. Note that AIR for TV devices automatically use hardware acceleration if it is available.

For example, the following code sends an untransformed bitmap representation of the display object to the GPU:

var matrix:Matrix = new Matrix(); // creates an identity matrix
mySprite.cacheAsBitmapMatrix = matrix;
mySprite.cacheAsBitmap = true;

Usually, the identity matrix (new Matrix()) suffices. However, you can use another matrix, such as a scaled-down matrix, to upload a different bitmap to the GPU. For example, the following example applies a cacheAsBitmapMatrix matrix that is scaled by 0.5 on the x and y axes. The bitmap object that the GPU uses is smaller, however the GPU adjusts its size to match the transform.matrix property of the display object:

var matrix:Matrix = new Matrix(); // creates an identity matrix
matrix.scale(0.5, 0.5); // scales the matrix
mySprite.cacheAsBitmapMatrix = matrix;
mySprite.cacheAsBitmap = true;

Generally, you should choose a matrix that transforms the display object to the size at which it will appear in the application. For example, if your application displays the bitmap version of the sprite scaled down by half, use a matrix that scales down by half. If your application will display the sprite larger than its current dimensions, use a matrix that scales up by that factor.

Note: The cacheAsBitmapMatrix property is suitable for 2D transformations. If you need to apply transformations in 3D, do so by setting a 3D property of the object and manipulating its transform.matrix3D property. If the application is packaged using GPU mode, this allows the 3D transforms to be applied to the object by the GPU. The cacheAsBitmapMatrix property is ignored for 3D objects.

filters:Array<BitmapFilter>

An indexed array that contains each filter object currently associated with the display object. The openfl.filters package contains several classes that define specific filters you can use.

Filters can be applied in Flash Professional at design time, or at run time by using ActionScript code. To apply a filter by using ActionScript, you must make a temporary copy of the entire filters array, modify the temporary array, then assign the value of the temporary array back to the filters array. You cannot directly add a new filter object to the filters array.

To add a filter by using ActionScript, perform the following steps (assume that the target display object is named myDisplayObject):

  1. Create a new filter object by using the constructor method of your chosen filter class.
  2. Assign the value of the myDisplayObject.filters array to a temporary array, such as one named myFilters.
  3. Add the new filter object to the myFilters temporary array.
  4. Assign the value of the temporary array to the myDisplayObject.filters array.
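
The four steps above can be sketched as follows. This is a sketch rather than the only valid pattern; it assumes a display object named myDisplayObject and uses GlowFilter as the example filter:

```haxe
import openfl.filters.BitmapFilter;
import openfl.filters.GlowFilter;

// 1. Create a new filter object.
var glow:GlowFilter = new GlowFilter(0xFF0000);

// 2. Copy the current filters array into a temporary array.
var myFilters:Array<BitmapFilter> = myDisplayObject.filters;

// 3. Add the new filter object to the temporary array.
myFilters.push(glow);

// 4. Assign the temporary array back to the filters property.
myDisplayObject.filters = myFilters;
```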

If the filters array is undefined, you do not need to use a temporary array. Instead, you can directly assign an array literal that contains one or more filter objects that you create. The first example in the Examples section adds a drop shadow filter by using code that handles both defined and undefined filters arrays.

To modify an existing filter object, you must use the technique of modifying a copy of the filters array:

  1. Assign the value of the filters array to a temporary array, such as one named myFilters.
  2. Modify the property by using the temporary array, myFilters. For example, to set the quality property of the first filter in the array, you could use the following code: myFilters[0].quality = 1;
  3. Assign the value of the temporary array to the filters array.

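The modification steps can be sketched similarly. This sketch assumes myDisplayObject already has a GlowFilter applied; the cast is only safe under that assumption:

```haxe
import openfl.filters.BitmapFilter;
import openfl.filters.GlowFilter;

// 1. Copy the filters array into a temporary array.
var myFilters:Array<BitmapFilter> = myDisplayObject.filters;

// 2. Modify a property on the copy. The base BitmapFilter type does not
//    expose quality, so cast to the concrete filter class first.
var firstFilter:GlowFilter = cast myFilters[0];
firstFilter.quality = 1;

// 3. Assign the temporary array back to the filters property.
myDisplayObject.filters = myFilters;
```
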
At load time, if a display object has an associated filter, it is marked to cache itself as a transparent bitmap. From this point forward, as long as the display object has a valid filter list, the player caches the display object as a bitmap. This source bitmap is used as a source image for the filter effects. Each display object usually has two bitmaps: one with the original unfiltered source display object and another for the final image after filtering. The final image is used when rendering. As long as the display object does not change, the final image does not need updating.

The openfl.filters package includes classes for filters. For example, to create a DropShadow filter, you would write:
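
(A minimal sketch; myDisplayObject is a hypothetical display object and the default shadow parameters are used.)

```haxe
import openfl.filters.DropShadowFilter;

var shadow:DropShadowFilter = new DropShadowFilter();
myDisplayObject.filters = [shadow];
```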

Throws:

ArgumentError

When filters includes a ShaderFilter and the shader output type is not compatible with this operation (the shader must specify a pixel4 output).

ArgumentError

When filters includes a ShaderFilter and the shader doesn't specify any image input or the first input is not an image4 input.

ArgumentError

When filters includes a ShaderFilter and the shader specifies an image input that isn't provided.

ArgumentError

When filters includes a ShaderFilter, a ByteArray or Vector instance is used as a shader input, and the width and height properties aren't specified for the ShaderInput object, or the specified values don't match the amount of data in the input data. See the ShaderInput.input property for more information.

height:Float

Indicates the height of the display object, in pixels. The height is calculated based on the bounds of the content of the display object. When you set the height property, the scaleY property is adjusted accordingly, as shown in the following code:
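
(A sketch, assuming a hypothetical Shape drawn 100 pixels tall; setting height to 200 doubles scaleY.)

```haxe
import openfl.display.Shape;

var rect:Shape = new Shape();
rect.graphics.beginFill(0xFF0000);
rect.graphics.drawRect(0, 0, 100, 100);
trace(rect.scaleY); // 1
rect.height = 200;
trace(rect.scaleY); // 2
```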

Except for TextField and Video objects, a display object with no content (such as an empty sprite) has a height of 0, even if you try to set height to a different value.

read only loaderInfo:LoaderInfo

Returns a LoaderInfo object containing information about loading the file to which this display object belongs. The loaderInfo property is defined only for the root display object of a SWF file or for a loaded Bitmap (not for a Bitmap that is drawn with ActionScript). To find the loaderInfo object associated with the SWF file that contains a display object named myDisplayObject, use myDisplayObject.root.loaderInfo.

A large SWF file can monitor its download by calling this.root.loaderInfo.addEventListener(Event.COMPLETE, func).

mask:DisplayObject

The calling display object is masked by the specified mask object. To ensure that masking works when the Stage is scaled, the mask display object must be in an active part of the display list. The mask object itself is not drawn. Set mask to null to remove the mask.
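
For example, this sketch (assuming it runs inside a Sprite subclass, so addChild() is available) masks a circle with a square:

```haxe
import openfl.display.Sprite;

var circle:Sprite = new Sprite();
circle.graphics.beginFill(0xFF0000);
circle.graphics.drawCircle(50, 50, 50);
addChild(circle);

var square:Sprite = new Sprite();
square.graphics.beginFill(0x000000);
square.graphics.drawRect(0, 0, 60, 60);
addChild(square); // keep the mask on the display list so it can be scaled or dragged

circle.mask = square; // only the part of the circle under the square is drawn
```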

To be able to scale a mask object, it must be on the display list. To be able to drag a mask Sprite object(by calling its startDrag() method), it must be on the display list. To call the startDrag() method for a mask sprite based on a mouseDown event being dispatched by the sprite, set the sprite's buttonMode property to true.

When display objects are cached by setting the cacheAsBitmap property to true and the cacheAsBitmapMatrix property to a Matrix object, both the mask and the display object being masked must be part of the same cached bitmap. Thus, if the display object is cached, then the mask must be a child of the display object. If an ancestor of the display object on the display list is cached, then the mask must be a child of that ancestor or one of its descendants. If more than one ancestor of the masked object is cached, then the mask must be a descendant of the cached container closest to the masked object in the display list.

Note: A single mask object cannot be used to mask more than one calling display object. When the mask is assigned to a second display object, it is removed as the mask of the first object, and that object's mask property becomes null.

read only mouseX:Float

Indicates the x coordinate of the mouse or user input device position, in pixels.

Note: For a DisplayObject that has been rotated, the returned x coordinate will reflect the non-rotated object.

read only mouseY:Float

Indicates the y coordinate of the mouse or user input device position, in pixels.

Note: For a DisplayObject that has been rotated, the returned y coordinate will reflect the non-rotated object.

name:String

Indicates the instance name of the DisplayObject. The object can be identified in the child list of its parent display object container by calling the getChildByName() method of the display object container.

Throws:

IllegalOperationError

If you are attempting to set this property on an object that was placed on the timeline in the Flash authoring tool.

opaqueBackground:Null<Int>

Specifies whether the display object is opaque with a certain background color. A transparent bitmap contains alpha channel data and is drawn transparently. An opaque bitmap has no alpha channel (and renders faster than a transparent bitmap). If the bitmap is opaque, you specify its own background color to use.

If set to a number value, the surface is opaque (not transparent) with the RGB background color that the number specifies. If set to null (the default value), the display object has a transparent background.

The opaqueBackground property is intended mainly for use with the cacheAsBitmap property, for rendering optimization. For display objects in which the cacheAsBitmap property is set to true, setting opaqueBackground can improve rendering performance.
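
A minimal sketch of the intended pairing (mySprite is a hypothetical display object):

```haxe
mySprite.cacheAsBitmap = true;
mySprite.opaqueBackground = 0xFFFFFF; // cache the sprite over an opaque white background
```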

The opaque background region is not matched when calling the hitTestPoint() method with the shapeFlag parameter set to true.

The opaque background region does not respond to mouse events.

read only parent:DisplayObjectContainer

Indicates the DisplayObjectContainer object that contains this display object. Use the parent property to specify a relative path to display objects that are above the current display object in the display list hierarchy.

You can use parent to move up multiple levels in the display list as in the following:

this.parent.parent.alpha = 0.5;

Throws:

SecurityError

The parent display object belongs to a security sandbox to which you do not have access. You can avoid this situation by having the parent movie call the Security.allowDomain() method.

read only root:DisplayObject

For a display object in a loaded SWF file, the root property is the top-most display object in the portion of the display list's tree structure represented by that SWF file. For a Bitmap object representing a loaded image file, the root property is the Bitmap object itself. For the instance of the main class of the first SWF file loaded, the root property is the display object itself. The root property of the Stage object is the Stage object itself. The root property is set to null for any display object that has not been added to the display list, unless it has been added to a display object container that is off the display list but that is a child of the top-most display object in a loaded SWF file.

For example, if you create a new Sprite object by calling the Sprite() constructor method, its root property is null until you add it to the display list (or to a display object container that is off the display list but that is a child of the top-most display object in a SWF file).

For a loaded SWF file, even though the Loader object used to load the file may not be on the display list, the top-most display object in the SWF file has its root property set to itself. The Loader object does not have its root property set until it is added as a child of a display object for which the root property is set.

rotation:Float

Indicates the rotation of the DisplayObject instance, in degrees, from its original orientation. Values from 0 to 180 represent clockwise rotation; values from 0 to -180 represent counterclockwise rotation. Values outside this range are added to or subtracted from 360 to obtain a value within the range. For example, the statement my_video.rotation = 450 is the same as my_video.rotation = 90.

scale9Grid:Rectangle

The current scaling grid that is in effect. If set to null, the entire display object is scaled normally when any scale transformation is applied.

When you define the scale9Grid property, the display object is divided into a grid with nine regions based on the scale9Grid rectangle, which defines the center region of the grid. The eight other regions of the grid are the following areas:

  • The upper-left corner outside of the rectangle
  • The area above the rectangle
  • The upper-right corner outside of the rectangle
  • The area to the left of the rectangle
  • The area to the right of the rectangle
  • The lower-left corner outside of the rectangle
  • The area below the rectangle
  • The lower-right corner outside of the rectangle

You can think of the eight regions outside of the center (defined by the rectangle) as being like a picture frame that has special rules applied to it when scaled.

Note: Content that is not rendered through the graphics interface of a display object will not be affected by the scale9Grid property.

When the scale9Grid property is set and a display object is scaled, all text and gradients are scaled normally; however, for other types of objects the following rules apply:

  • Content in the center region is scaled normally.
  • Content in the corners is not scaled.
  • Content in the top and bottom regions is scaled horizontally only.
  • Content in the left and right regions is scaled vertically only.
  • All fills (including bitmaps, video, and gradients) are stretched to fit their shapes.

If a display object is rotated, all subsequent scaling is normal (and the scale9Grid property is ignored).

For example, consider the following display object and a rectangle that is applied as the display object's scale9Grid:

[Image: the display object, with a red rectangle showing the scale9Grid region]

When the display object is scaled or stretched, the objects within the rectangle scale normally, but the objects outside of the rectangle scale according to the scale9Grid rules:

[Images: the display object scaled to 75%, 50%, and 25%, and stretched horizontally to 150%]

A common use for setting scale9Grid is to set up a display object to be used as a component, in which edge regions retain the same width when the component is scaled.

Throws:

ArgumentError

If you pass an invalid argument to the method.

scaleX:Float

Indicates the horizontal scale (percentage) of the object as applied from the registration point. The default registration point is (0,0). 1.0 equals 100% scale.

Scaling the local coordinate system changes the x and y property values, which are defined in whole pixels.

scaleY:Float

Indicates the vertical scale (percentage) of an object as applied from the registration point of the object. The default registration point is (0,0). 1.0 is 100% scale.

Scaling the local coordinate system changes the x and y property values, which are defined in whole pixels.

scrollRect:Rectangle

The scroll rectangle bounds of the display object. The display object is cropped to the size defined by the rectangle, and it scrolls within the rectangle when you change the x and y properties of the scrollRect object.

The properties of the scrollRect Rectangle object use the display object's coordinate space and are scaled just like the overall display object. The corner bounds of the cropped window on the scrolling display object are the origin of the display object (0,0) and the point defined by the width and height of the rectangle. They are not centered around the origin, but use the origin to define the upper-left corner of the area. A scrolled display object always scrolls in whole pixel increments.

You can scroll an object left and right by setting the x property of the scrollRect Rectangle object. You can scroll an object up and down by setting the y property of the scrollRect Rectangle object. If the display object is rotated 90° and you scroll it left and right, the display object actually scrolls up and down.
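
For example, this sketch (mySprite is a hypothetical display object) crops a sprite to a 100 x 100 window and scrolls it down. Changes generally take effect only when a rectangle is assigned to the property, so modify the rectangle and assign it back:

```haxe
import openfl.geom.Rectangle;

// Crop the sprite to a 100 x 100 window.
var rect:Rectangle = new Rectangle(0, 0, 100, 100);
mySprite.scrollRect = rect;

// Scroll the content down by 10 pixels: modify the rectangle, then reassign it.
rect.y += 10;
mySprite.scrollRect = rect;
```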

shader:Shader

BETA

Applies a custom Shader object to use when rendering this display object (or its children) when using hardware rendering. This occurs as a single-pass render on this object only, if visible. To apply a post-process effect to multiple display objects at once, enable cacheAsBitmap or use the filters property with a ShaderFilter.

read only stage:Stage

The Stage of the display object. A Flash runtime application has only one Stage object. For example, you can create and load multiple display objects into the display list, and the stage property of each display object refers to the same Stage object (even if the display object belongs to a loaded SWF file).

If a display object is not added to the display list, its stage property is set to null.

transform:Transform

An object with properties pertaining to a display object's matrix, color transform, and pixel bounds. The specific properties - matrix, colorTransform, and three read-only properties (concatenatedMatrix, concatenatedColorTransform, and pixelBounds) - are described in the entry for the Transform class.

Each of the transform object's properties is itself an object. This concept is important because the only way to set new values for the matrix or colorTransform objects is to create a new object and copy that object into the transform.matrix or transform.colorTransform property.

For example, to increase the tx value of a display object's matrix, you must make a copy of the entire matrix object, then copy the new object into the matrix property of the transform object:

var myMatrix:Matrix = myDisplayObject.transform.matrix;
myMatrix.tx += 10;
myDisplayObject.transform.matrix = myMatrix;

You cannot directly set the tx property. The following code has no effect on myDisplayObject:

myDisplayObject.transform.matrix.tx += 10;

You can also copy an entire transform object and assign it to another display object's transform property. For example, the following code copies the entire transform object from myOldDisplayObj to myNewDisplayObj:

myNewDisplayObj.transform = myOldDisplayObj.transform;

The resulting display object, myNewDisplayObj, now has the same values for its matrix, color transform, and pixel bounds as the old display object, myOldDisplayObj.

Note that AIR for TV devices use hardware acceleration, if it is available, for color transforms.

visible:Bool

Whether or not the display object is visible. Display objects that are not visible are disabled. For example, if visible=false for an InteractiveObject instance, it cannot be clicked.

width:Float

Indicates the width of the display object, in pixels. The width is calculated based on the bounds of the content of the display object. When you set the width property, the scaleX property is adjusted accordingly, as shown in the following code:
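
(A sketch, assuming a hypothetical Shape drawn 100 pixels wide; setting width to 200 doubles scaleX.)

```haxe
import openfl.display.Shape;

var rect:Shape = new Shape();
rect.graphics.beginFill(0xFF0000);
rect.graphics.drawRect(0, 0, 100, 100);
trace(rect.scaleX); // 1
rect.width = 200;
trace(rect.scaleX); // 2
```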

Except for TextField and Video objects, a display object with no content (such as an empty sprite) has a width of 0, even if you try to set width to a different value.

x:Float

Indicates the x coordinate of the DisplayObject instance relative to the local coordinates of the parent DisplayObjectContainer. If the object is inside a DisplayObjectContainer that has transformations, it is in the local coordinate system of the enclosing DisplayObjectContainer. Thus, for a DisplayObjectContainer rotated 90° counterclockwise, the DisplayObjectContainer's children inherit a coordinate system that is rotated 90° counterclockwise. The object's coordinates refer to the registration point position.

y:Float

Indicates the y coordinate of the DisplayObject instance relative to the local coordinates of the parent DisplayObjectContainer. If the object is inside a DisplayObjectContainer that has transformations, it is in the local coordinate system of the enclosing DisplayObjectContainer. Thus, for a DisplayObjectContainer rotated 90° counterclockwise, the DisplayObjectContainer's children inherit a coordinate system that is rotated 90° counterclockwise. The object's coordinates refer to the registration point position.

Inherited Methods

Defined by DisplayObject

getBounds (targetCoordinateSpace:DisplayObject):Rectangle

Returns a rectangle that defines the area of the display object relative to the coordinate system of the targetCoordinateSpace object. Consider the following code, which shows how the rectangle returned can vary depending on the targetCoordinateSpace parameter that you pass to the method:
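
(A sketch, assuming it runs in the application's main Sprite; the container is offset so the two coordinate spaces differ.)

```haxe
import openfl.display.Shape;
import openfl.display.Sprite;

var container:Sprite = new Sprite();
container.x = 100;
container.y = 100;
addChild(container);

var contents:Shape = new Shape();
contents.graphics.beginFill(0x0000FF);
contents.graphics.drawCircle(0, 0, 100);
container.addChild(contents);

// Relative to the container, the circle is centered on the container's origin:
trace(contents.getBounds(container)); // x=-100, y=-100, width=200, height=200

// Relative to this (the main sprite), the bounds shift by the container's position:
trace(contents.getBounds(this)); // x=0, y=0, width=200, height=200
```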

Note: Use the localToGlobal() and globalToLocal() methods to convert the display object's local coordinates to Stage coordinates, or Stage coordinates to local coordinates, respectively.

The getBounds() method is similar to the getRect() method; however, the Rectangle returned by the getBounds() method includes any strokes on shapes, whereas the Rectangle returned by the getRect() method does not. For an example, see the description of the getRect() method.

Parameters:

targetCoordinateSpace

The display object that defines the coordinate system to use.

Returns:

The rectangle that defines the area of the display object relative to the targetCoordinateSpace object's coordinate system.

getRect (targetCoordinateSpace:DisplayObject):Rectangle

Returns a rectangle that defines the boundary of the display object, based on the coordinate system defined by the targetCoordinateSpace parameter, excluding any strokes on shapes. The values that the getRect() method returns are the same or smaller than those returned by the getBounds() method.

Note: Use localToGlobal() and globalToLocal() methods to convert the display object's local coordinates to Stage coordinates, or Stage coordinates to local coordinates, respectively.

Parameters:

targetCoordinateSpace

The display object that defines the coordinate system to use.

Returns:

The rectangle that defines the area of the display object relative to the targetCoordinateSpace object's coordinate system.

globalToLocal (pos:Point):Point

Converts the point object from the Stage (global) coordinates to the display object's (local) coordinates.

To use this method, first create an instance of the Point class. The x and y values that you assign represent global coordinates because they relate to the origin (0,0) of the main display area. Then pass the Point instance as the parameter to the globalToLocal() method. The method returns a new Point object with x and y values that relate to the origin of the display object instead of the origin of the Stage.
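
For example, this sketch (assuming it runs in a display object that is on the display list, and that mySprite is a hypothetical child sprite) converts the current mouse position from Stage coordinates into the sprite's local space:

```haxe
import openfl.geom.Point;

var stagePoint:Point = new Point(stage.mouseX, stage.mouseY);
var localPoint:Point = mySprite.globalToLocal(stagePoint);
```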

Parameters:

pos

An object created with the Point class. The Point object specifies the x and y coordinates as properties.

Returns:

A Point object with coordinates relative to the display object.

hitTestObject (obj:DisplayObject):Bool

Evaluates the bounding box of the display object to see if it overlaps or intersects with the bounding box of the obj display object.

Parameters:

obj

The display object to test against.

Returns:

true if the bounding boxes of the display objects intersect; false if not.

hitTestPoint (x:Float, y:Float, shapeFlag:Bool = false):Bool

Evaluates the display object to see if it overlaps or intersects with the point specified by the x and y parameters. The x and y parameters specify a point in the coordinate space of the Stage, not the display object container that contains the display object (unless that display object container is the Stage).
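
For example (mySprite is a hypothetical display object):

```haxe
// Bounding-box test at Stage coordinates (10, 10).
var hitBox:Bool = mySprite.hitTestPoint(10, 10);

// Pixel-accurate test at the same point.
var hitPixels:Bool = mySprite.hitTestPoint(10, 10, true);
```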

Parameters:

x

The x coordinate to test against this object.

y

The y coordinate to test against this object.

shapeFlag

Whether to check against the actual pixels of the object (true) or the bounding box (false).

Returns:

true if the display object overlaps or intersects with the specified point; false otherwise.

invalidate ():Void

Calling the invalidate() method signals that the current object should be redrawn the next time the object is eligible to be rendered.

localToGlobal (point:Point):Point

Converts the point object from the display object's (local) coordinates to the Stage (global) coordinates.

This method allows you to convert any given x and y coordinates from values that are relative to the origin (0,0) of a specific display object (local coordinates) to values that are relative to the origin of the Stage (global coordinates).

To use this method, first create an instance of the Point class. The x and y values that you assign represent local coordinates because they relate to the origin of the display object.

You then pass the Point instance that you created as the parameter to the localToGlobal() method. The method returns a new Point object with x and y values that relate to the origin of the Stage instead of the origin of the display object.
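
For example, this sketch (mySprite is a hypothetical display object on the display list) finds where the sprite's own origin lies on the Stage:

```haxe
import openfl.geom.Point;

var globalPoint:Point = mySprite.localToGlobal(new Point(0, 0));
trace(globalPoint); // the sprite's origin, in Stage coordinates
```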

Parameters:

point

The name or identifier of a point created with the Point class, specifying the x and y coordinates as properties.

Returns:

A Point object with coordinates relative to the Stage.

Defined by EventDispatcher

hasEventListener (type:String):Bool

Checks whether the EventDispatcher object has any listeners registered for a specific type of event. This allows you to determine where an EventDispatcher object has altered handling of an event type in the event flow hierarchy. To determine whether a specific event type actually triggers an event listener, use willTrigger().

The difference between hasEventListener() and willTrigger() is that hasEventListener() examines only the object to which it belongs, whereas willTrigger() examines the entire event flow for the event specified by the type parameter.

When hasEventListener() is called from a LoaderInfo object, only the listeners that the caller can access are considered.
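
The difference between the two methods can be sketched with a parent and child sprite (a sketch; the listener body is a placeholder):

```haxe
import openfl.display.Sprite;
import openfl.events.MouseEvent;

var parentSprite:Sprite = new Sprite();
var childSprite:Sprite = new Sprite();
parentSprite.addChild(childSprite);
parentSprite.addEventListener(MouseEvent.CLICK, function(e:MouseEvent):Void {});

trace(childSprite.hasEventListener(MouseEvent.CLICK)); // false: no listener on the child itself
trace(childSprite.willTrigger(MouseEvent.CLICK)); // true: an ancestor listens in the event flow
```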

Parameters:

type

The type of event.

Returns:

A value of true if a listener of the specified type is registered; false otherwise.

toString ():String

willTrigger (type:String):Bool

Checks whether an event listener is registered with this EventDispatcher object or any of its ancestors for the specified event type. This method returns true if an event listener is triggered during any phase of the event flow when an event of the specified type is dispatched to this EventDispatcher object or any of its descendants.

The difference between the hasEventListener() and the willTrigger() methods is that hasEventListener() examines only the object to which it belongs, whereas the willTrigger() method examines the entire event flow for the event specified by the type parameter.

When willTrigger() is called from a LoaderInfo object, only the listeners that the caller can access are considered.

Parameters:

type

The type of event.

Returns:

A value of true if a listener of the specified type will be triggered; false otherwise.