Chapter 7

The Standard X Toolkit - Intrinsics and Widgets

Introduction

This chapter describes the X standard toolkit, Xt, in depth. We look at the basic user interface components or `widgets' that it provides, and at the different types of widget that perform different functions. We describe how vendors have developed sets of widgets and management functions to provide a particular graphical user interface, for example, Motif or OPEN LOOK. We also examine the X Toolkit `Intrinsics', which is the framework for creating and manipulating widgets, and which provides a lot of functionality to simplify writing X applications. To finish, we look at some special-purpose toolkits.

7.1 The standard X Toolkit, `Xt'

X provides a standard basic mechanism, the `Intrinsics', for building toolkits of user interface components such as scrollbars, buttons, menus, etc. These components are called `widgets'. The combination of the Intrinsics and a set of widgets makes up a toolkit. Many different toolkits, with different look and feels, have been built using the same Intrinsics.

The Intrinsics is officially part of X - it is a standard laid down by the MIT X Consortium. The Intrinsics is not a toolkit in itself - instead, think of it as a construction kit for building toolkits. Like the rest of X, it is `policy-free', so it doesn't impose a look and feel. It provides mechanisms to create and manage user interface components, but it doesn't specify which components these are, or what they look like, or how they are to behave.

The Intrinsics defines a new type of object called a widget. A widget is a user interface component built within the framework of the standard X Intrinsics. The Intrinsics allows different types of widgets to be created, both `simple' visible objects such as buttons, labels, text-entry fields, etc., and `composite' widgets which are containers for other widgets - so that widgets can be arranged in rows (as in a menu-bar) or columns (as in a pull-down menu), or laid out in other programmer-defined ways. Figure A shows a very idealized picture of how we would like to use widgets: the cut-outs on the table-top and in the programmer's hand are how we would like to think of widgets!

It is the individual widgets that provide look and feel. For example, the pushbutton, scrollbar, and menu widgets will determine a lot of the appearance of an application, and a good deal of its user interface behaviour as well. More complex aspects of its behaviour - how you can use the keyboard to navigate from button to button or invoke specific menus, for example - are determined by the composite widgets, which incorporate special event processing code for this purpose. Because of this, it is possible to build many toolkits, each with a different look and feel, by building up different sets of widgets, even though all use the same standard Intrinsics. For example, the Motif toolkit, and the Xt+ and OLIT toolkits which implement the OPEN LOOK look and feel, are all Intrinsics-based.

In the next module we look at the relationship between the Toolkit and the Xlib standard X library.

Figure A. How we would like to think of widgets.

7.1.1 How the Toolkit is related to Xlib

The Toolkit complements rather than supersedes Xlib. The Toolkit primarily addresses fairly high-level user interface issues, whereas Xlib provides the fundamental support for all X operations, including much low-level processing. The X Toolkit (or rather, the many toolkits that have been built on the foundation provided by the Intrinsics) doesn't duplicate the low-level graphics and other functions which Xlib provides; as shown in Figure A, the application still has access to Xlib, and can continue to use Xlib functions where necessary. This approach is used because many of the Xlib functions are needed for full flexibility of graphics operations: to provide these in the toolkit as well would be pure duplication. Instead, toolkits concentrate on the user interface, which is a higher level of abstraction and an area that Xlib doesn't address at all.

As we mentioned in the previous module, the relationship of the toolkit to Xlib is similar to that of standard I/O libraries to the raw operating system calls. The higher-level functions are usually quicker to learn [Footnote: X Toolkit (Intrinsics) programming has often been criticized for being difficult to learn. This was mainly because the Toolkit documentation from MIT was very brief and highly technical, emphasizing the internals of the Toolkit rather than how to use it. This problem has since been addressed, with the publication of many excellent books on programming the Toolkit, many of which assume no prior knowledge of X.] and easier to use because they present the programmer with a simpler application program interface or API (Figure B). They frequently give better performance as well. In fact, the programmer doesn't need to know anything about Xlib at all for many applications; where all the X programming is for building and handling the user interface, this can be done using the Toolkit exclusively. It is usually only for detailed and specific control of low-level graphics operations that the programmer would need to use Xlib.

The Intrinsics is written in C, so it can be ported to as many different systems as possible.

Now we go on to look at what widgets really are, and how we use the Intrinsics to handle them.

Figure A. The toolkit does not supersede Xlib's functionality.

Figure B. Using a toolkit presents a simpler API to the programmer.

7.2 Widgets and widget sets are building blocks

Toolkit widgets are basic user interface components which are used either directly, to build the user interface of the application, or combined, to form more complex objects which in turn are used in the application. Widgets are usually provided in `widget sets' - consistent packages of widgets designed to work well together.

Widgets are building blocks which the applications programmer uses to assemble the user interface of the application. A widget consists of an X window plus some data structures and procedures. Widgets come `ready-made' - just like standard functions provided in subroutine libraries, they have usually been written by somebody else (your system's vendor, or a third-party software provider) and you just use them as they come.

The reason you can use widgets from different suppliers is that they are objects, in the object-oriented programming sense. Widgets hide details of their implementation from the programmer using them, by encapsulating much of the information and the procedures necessary for them to operate correctly. For instance, when the window of a pushbutton widget is covered up and re-exposed, it will receive an exposure event; however, as all the graphics in the button window have been drawn there by the widget itself, it knows what needs to be redrawn, and it has within it an expose procedure which is automatically called for the expose event, so the applications programmer doesn't have to worry about exposures. Similarly, widgets have procedures built in to handle other occurrences such as resizing, and keyboard or mouse input where appropriate.

There are two broad classes of widgets, composite and primitive. Composite widgets are containers or boxes for other widgets and usually don't display any visual information in themselves. Primitive widgets perform some function, and display it graphically; they cannot act as containers for other widgets. Within an application, widgets are organized in a tree structure, as shown in Figure A. The leaves of the tree are usually primitive widgets, and the higher-level nodes of the tree must be composite widgets, as only they can have child widgets.

You can write your own widgets if you wish, either starting from scratch, or by subclassing your widget from an existing one which provides some of the functionality you require. (For example, if you had a histogram widget, you might subclass a pie-chart widget from it. The new widget provides much of the same function as the old, and will be very similar to it in many ways, but it will need new internal procedures built into it to display data in pie-chart rather than histogram format.) However, writing widgets of any type is advanced work, and should really be left for specialist programmers.

We cover composite widgets and primitive widgets in more detail in the modules after this.

Widget sets

While the design of the Toolkit is such that you should be able to use widgets from different vendors in the one program, widgets usually come in sets. A widget set will normally provide:

MIT provides the Athena widget set as a sample implementation. It is used for MIT's own applications, but isn't widely used commercially. The predominant Motif and OPEN LOOK look and feels have toolkit-based implementations (as well as others) and each provides a large set of widgets.

You buy widget sets from your workstation manufacturer or your X software vendor, or from third parties. There are also many useful widgets available free on the public networks.

This module and the next three modules describe what widgets are; Module 7.3 explains how we manipulate them using the Intrinsics.

Figure A. Widgets are organized in a tree structure within an application.

7.2.1 Composite widgets

Composite widgets are basically containers for other widgets - boxes to put them in, and perhaps keep them in some specified form of layout. They vary greatly in the functionality they offer, and their degree of sophistication. Toolkits often provide `complex' widgets consisting of a container and child widgets preconfigured to perform some specialized function.

The simplest type of composite widget lets you position the child widgets in it initially: you must specify their size and location, and they stay like that forever. If the container widget gets smaller, the position of the children is not adjusted, and they may get truncated. Conversely, if the container gets bigger, the children are left as they were, with a large expanse of space beside or below them (Figure A).

A more advanced container widget automatically resizes its children, often on a per-child basis, as it gets bigger or smaller itself. For example, in the application illustrated in Figure B, the small buttons remain fixed in size and location, the menu bar remains constant in height but its width stretches or shrinks so that it is always the full width of the application's window, and the large work area containing the graphics expands so that it always uses the maximum area of the application's window available to it.

Other composite widgets perform functions specialized for certain tasks. Some will lay out child widgets in a horizontal row to form a menu-bar, and will perform special keyboard-input handling to enable menu items to be selected by typing rather than by clicking with the mouse. Others will lay out their children vertically and resize them so they are all the same width, to form pull-down menus. Still others will manage collections of buttons, and ensure that if one is pressed `in' to denote selection of one item, all the others are `out' and not selected, to form a set of `radio buttons', giving mutually exclusive selection.

There are still other composite widgets which consist of a container plus some specific children, all made up into one pre-configured building block. For example, the Motif selection box shown in Figure C consists of a composite widget and many child widgets, including a list widget with the list of words, a scrollbar, a text widget for the `selection', plus a separator (the horizontal line) and three buttons. However, to the applications programmer this appears more or less as a single object and can be manipulated as such. Other examples of these compound objects are simple dialogs which prompt for confirmation, give information, or issue warnings.

In the next module we go on to look at primitive widgets in detail.

Figure A. A simple manager widget can truncate its children.
Figure B. How an advanced manager widget allows sophisticated layout policies.

Figure C. A Motif selection box widget.

7.2.2 Primitive widgets

Each primitive widget provides a single, specific type of function, usually for controlling some aspect of the user interface. Primitive widgets cannot contain children.

Most primitive widgets are controls of some kind, and usually have a `look' or graphical appearance indicating what type of object the widget is and how it works, and giving feedback about what is happening as you use the object. For example, a scrollbar widget controls motion of something, perhaps scrolling a piece of graphics, or a set of items in a list. Its visual appearance, with arrows at either end (in the case of the Motif scrollbar shown in Figure A), suggests that it allows movement in two directions, and the 3-D appearance of the arrows and the slider suggests that you can do things to them (like pull at them, or click on them); and when you use the scrollbar, the slider does indeed move, giving you feedback both that motion is actually happening and on the nature of that motion.

Other typical primitive widgets include pushbuttons, labels, slider widgets, single-line or multi-line text widgets, `blank canvas' widgets for drawing your own graphics into, arrows, and separators (Figure B).

In the next module we consider `gadgets', which are replacements for simple widgets, and which may offer improved performance.

Figure A. The widget graphic indicates what it is and hints how it works.
Figure B. Examples of some primitive widgets.

7.2.3 Gadgets - windowless replacements for widgets

A `gadget' is a replacement for a primitive widget; it performs the same function but has no X window associated with it. Gadgets are supposed to improve performance, but this may not always be the case. Gadgets cannot have children.

Gadgets are defined by some toolkits, and are used a lot in Motif in particular. They are similar to widgets, but they do not have any X window associated with them, and therefore require no memory in the server. They are used by the applications programmer in exactly the same way as widgets, but because they don't have a window of their own, there are some restrictions on what you can do with them:

Figure A shows a menu built with widget button children, Figure B shows the same menu with gadget children, and the corresponding window trees are shown in Figures C and D.

Gadgets were invented for performance reasons. Early studies showed that the performance bottleneck with widgets was in the basic handling of their associated windows in the server; also, each window you use requires some memory in the server, so if your application contains many widgets, you are using a lot of windows and therefore server memory. Accordingly, gadgets were developed. However, later studies indicate that the servers used for this work were particularly bad at manipulating windows, so a better solution would have been to fix the server. Moreover, recent releases of X have dramatically improved window manipulation performance within the server, and have reduced the amount of server memory required per window by a factor of three (to about 100 bytes). And finally, more recent studies indicate that the extra event-handling load imposed by gadgets often outweighs any other performance benefit they may have.

Gadgets in detail - implementation and performance

A gadget is like a widget, but has no X window of its own. Thus it has most of the data structures of a widget, but the area of the screen where it is drawn is actually part of the window belonging to the gadget's parent (which must of course be a widget, as gadgets can't have children). For simplicity, let us consider a pushbutton gadget which is a pane on a menu, that is, the gadget is a child of a menupane widget.

When you press the gadget pushbutton, this <ButtonPress> event occurs in the menupane widget (because an event occurs in, and is related to, a window). The menupane widget has to tell the gadget to repaint the area of the menupane window that `belongs to' the gadget. And when the button is released, again the menupane widget has to process the event and inform the gadget. Thus some of the functions that would normally be done by a pushbutton widget child cannot be handled by a gadget child, but become the parent's responsibility. This is the reason why the number of composite widgets that can handle gadget children is limited.

Now let us look at the performance implications. Any pushbutton - widget or gadget - needs to know when the pointer crosses its boundary and enters, or leaves, the button (so it can highlight or un-highlight itself, in a menu, for example). A widget pushbutton handles this by selecting for <EnterNotify> and <LeaveNotify> events, so the server automatically informs the button about these. However, with a gadget, because it has no window, this isn't possible. Instead, the parent widget must keep track of the pointer all the time (using <MotionNotify> events) and constantly check its position to see if it has entered the gadget's area. This continual generation of motion events by the server, transmission across the network, and checking of them by the client is usually much more expensive than handling the few enter/leave events required by a widget.

Probably the best advice about gadgets is this: before using them extensively, especially in applications run remotely, check on your own system whether they really do give you any better performance; if they don't, use widgets, because that is easier, more flexible, and less error-prone.

Figure A. Menu, with widget buttons.

Figure B. Menu, with gadget buttons.

Figure C. Window hierarchy for menu built with widgets.

Figure D. Window hierarchy for menu built with gadgets.

7.3 The `Intrinsics' lets you manipulate widgets

The Intrinsics provides an object-oriented programming system, and widgets are objects in this system. The Intrinsics contains mechanisms for creating and manipulating widgets, plus a very large collection of utility functions which simplify and standardize writing X applications.

As we said, an important design goal of the Toolkit was that once widgets were written they should be reusable in other applications. The best software technology currently available to do this is object-oriented programming. We already saw in the previous module that widgets are objects, which we manipulate in the Toolkit environment. However, the X Toolkit is written in the C language, which is not object-oriented, so the Toolkit itself must provide the object-oriented programming framework. [Footnote: Some of the original X developers have remarked that had C++ been as widespread then as it is now, it would have been chosen as the implementation language because it would have simplified the work considerably. However, at that time choosing C++ would have seriously restricted the adoption of X.]

Because of a widget's object nature, to use it you only need to know its external behaviour (for example, that it is a pushbutton and handles the tasks that you expect of a button) and the `messages' you can send to it, both to set its parameters and to tell it to perform its tasks. You don't have to know how it has been implemented, or how its internal data structures are organized. This in turn means that the applications programmer can treat widgets as black boxes or building blocks, and slot them together without much trouble.

Object orientation is also crucial in making the system flexible and extensible. For example, a manager widget which lays its children out in neat rows doesn't need to know in detail what type of widget each child is. Each child is `just a widget', and the manager widget knows that to resize the child (to fit in neatly with its neighbours, say) it need only send the child a resize message with the desired dimensions as parameters, rather than having to know anything in detail about the child. So when we develop new widgets (or buy them from somebody else) and use them in our programs, our existing manager widgets will continue to perform their function correctly, and will be able to manage the new widgets properly, even though their author never anticipated the new widget type.

Object orientation also helps in controlling the complexity of programs, in increasing their reliability, and in creating new widgets based on existing ones. It does this by means of classes of objects. It allows new classes of objects to be based on existing ones, inheriting all but specified behaviour, with extra functionality added as well. Ideally, it ought to be possible to subclass widgets for which you don't have the source code, but at the current point of maturity of the Toolkit, this is not often feasible in practice.

Figure A shows the class hierarchy for a selection of Motif widgets. A class which is connected to one above it by a line in the Figure is said to be a subclass of the one above; the class above is said to be a superclass of the one below. Most of the classes near the top of the hierarchy, which define the fundamental characteristics of all objects and widgets, and characteristics specific to Composite, are defined by the MIT Toolkit itself and are used by all other Intrinsics-based toolkits as well as Motif. Thus all widgets in all widget implementations are descended from (subclassed from) the Core class, and manager widgets are subclasses of Composite. However, the XmPrimitive class is specific to Motif; in most other toolkits, primitive widgets are subclassed direct from Core.

Note that in Figure A gadgets separate out into their own distinct class tree at a very early (high-up) stage. This is because they do not have windows associated with them, and so cannot be subclassed from the WindowObj (window object) class, which adds window characteristics to its immediate superclass, RectObj (rectangular objects). Because of this early separation, the gadget classes have to duplicate much of the functionality they would otherwise have inherited from the Primitive widget class.

In the next module we look at the functions the Intrinsics provides.

Figure A. The class hierarchy for the Motif family of widgets.

7.3.1 Functions provided by the Intrinsics

The toolkit Intrinsics provides two broad classes of functions, those for dealing with widgets, and other more general utility functions.

Widget-related operations

The Intrinsics obviously must provide the whole framework and the programming functions necessary to manage widgets throughout their existence. These include creating and destroying widgets, telling container widgets which children they are to manage and how they are to be laid out, and so on. There is also the important basic mechanism of being able to send a message with some arguments to a widget to change some of its parameters dynamically - for example, to change its background colour, its size, or the text displayed in it. Also included are functions for controlling which children a manager widget is to display, and for popping up and popping down menus.
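
As a minimal sketch of that message-sending mechanism (the widget variable and the sizes chosen here are hypothetical, not taken from any particular application), changing a widget's parameters from the program might look like this:

    /* A minimal sketch, assuming 'button' is an already-created widget.
       XtVaSetValues() sends new resource values to the widget, which
       then adjusts and redisplays itself accordingly. */
    #include <X11/Intrinsic.h>
    #include <X11/StringDefs.h>

    void shrink_button(Widget button)
    {
        XtVaSetValues(button,
                      XtNwidth,  50,    /* Core resources: size in pixels */
                      XtNheight, 20,
                      NULL);
    }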

A large part of the Toolkit is devoted to handling events within widgets. This is covered in the next module.

General utility functions

The Intrinsics handles much of the `boilerplate' of writing an X application. Simple functions are provided which hide much of the complexity of the low-level operation. For example, a simple call to initialize the Toolkit will also parse command-line options, read in default settings from various files, open the connection from the application to the server, and perform all the work necessary to interface the application to the window manager. There is support for writing applications that connect to more than one display, which for example you might use in an interactive messaging program, with two users communicating with each other remotely over a network (Figure A). The Intrinsics also supports the Selections Service used to exchange information and data between applications, as in cut-and-paste operations, for example (see Module 9.2.1).
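
As a hedged sketch of how little of this boilerplate the programmer sees, the fragment below uses one common form of the initialization call together with an Athena Label widget; the application class `Demo' and the widget names are hypothetical.

    /* A minimal Intrinsics application: one call parses the standard
       command-line options, reads the resource (defaults) files, opens
       the connection to the server and creates the application shell. */
    #include <X11/Intrinsic.h>
    #include <X11/StringDefs.h>
    #include <X11/Xaw/Label.h>        /* an Athena widget, for illustration */

    int main(int argc, char **argv)
    {
        XtAppContext app;
        Widget       toplevel, label;

        toplevel = XtAppInitialize(&app, "Demo", NULL, 0,
                                   &argc, argv, NULL, NULL, 0);

        label = XtVaCreateManagedWidget("greeting", labelWidgetClass,
                                        toplevel, XtNlabel, "hello", NULL);

        XtRealizeWidget(toplevel);    /* create the X windows for the widgets */
        XtAppMainLoop(app);           /* hand event handling to the Intrinsics */
        return 0;
    }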

Facilities for easy handling of colours and fonts are embedded in standard functions, so that the applications programmer never needs to call the low-level functions directly. For example, to set the background colour for a pushbutton, the colour value `Red' could be specified in a defaults file (a text file containing default settings for one or more applications). The Toolkit reads this file and automatically knows to convert the string of three ASCII characters R, e and d to a reference to X's colour database, and then to convert that value to an internal colour value. This Intrinsics facility for handling default and preference settings is very powerful, but here we'll just mention that it can be used for customizing a program according to personal preference, to match the requirements of particular customers, or to localize it for use with different national languages. Almost anything can be customized in this way, from simple colours through keyboard mappings to the layout of various items within the application. Chapter 12 deals in depth with customizing applications, using the Toolkit's so-called `Resources' mechanism.
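
For instance, a single (hypothetical) line like the following in a defaults file is all that has to be supplied; the Toolkit performs the whole string-to-colour conversion itself when the program starts:

    myapp*saveButton*background: Red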

As a result of all this, many applications programmers can completely avoid the details of low-level Xlib programming, and work entirely with Toolkit functions. Toolkit programming is, in general, much easier than Xlib programming.

Figure A. The Toolkit supports connecting a single client to many displays.

7.4 Event handling and `callbacks'

The Intrinsics contains code to handle events in widgets and to make event-driven programming easier. The Intrinsics handles the low-level X events and transforms them into high-level logical occurrences called `callbacks'. This allows the programmer to specify in a simple way that a particular procedure in the application is to be called whenever a change occurs in a particular component in the user interface.

(If technical details aren't so relevant to you, you can skip over this module.)

As we described in Module 5.2.1, an X application is event-driven and must be prepared to process events that may occur in any part of the user interface. With Xlib programming, the applications programmer has to specify explicitly which events are to be watched for in which window, and then handle each of these more or less on a one-by-one basis.

The Intrinsics minimizes this complexity by processing all events for all widget windows itself. It maps low-level X events into high-level `logical events' or `callbacks' which are much easier for the applications programmer to handle, because a lot of the trivial programming details have been avoided. For example, the Motif pushbutton defines the activate callback, which is invoked whenever you `press' the pushbutton. For each callback/widget combination, the programmer can specify (or register) a procedure (a callback procedure) which is to be executed whenever this logical event occurs. The callback procedure is written by the applications programmer and usually carries out some application-related functionality; for example, the callback procedure for a Save button in a File pull-down menu might save the current contents of the application's window to a file on disk. How a callback procedure is registered and later invoked is shown schematically in Figure A. If no procedure is specified for a particular callback, no special action is taken, so the programmer only has to handle events or callbacks which are specifically of interest.

Each type of widget defines its own set of callbacks to suit the type of actions that it is designed to perform. For example, a scrollbar might have a valueChanged callback, called whenever the position of the scrollbar (and therefore the value it represents) changes. A widget for text input might have a modifyVerify callback to allow the programmer to check that text typed by the user is valid, before it is actually entered into and displayed in the text widget, to allow for validated forms-based input.

The mapping of which X event (or sequence of events) is to cause a particular callback to occur in a widget is defined by a translation table. As we shall see in Module 12.4, you can modify these tables, and so customize the keyboard and input characteristics of your application.

Some events are processed automatically by the Toolkit and don't need to be passed to the application part of the program at all, so no callbacks are defined for them. For example, as we described in Module 6.4.1, exposure events for almost all widgets are handled by the Toolkit itself, because it knows what it drew in the widget's window and so can redraw the contents itself instead of calling a programmer-specified procedure.

Explicit exposure callbacks are only provided for widgets that contain graphics drawn directly by the application and not via the Toolkit, and which have to be redrawn by a programmer-specified application function.

In the next module we look at the relationship of callbacks and events in more detail, and at non-user interface `events' in the module after that.

Figure A. Intrinsics event and callback handling.

7.4.1 Callbacks simplify event-driven programming

The internal processing of events, even for a simple component like a pushbutton, can be very complex. Callbacks simplify this by removing the need for the applications programmer to deal with low-level events at all.

(If technical details aren't so relevant to you, you can skip over this module.)

The best way to illustrate the idea of callbacks is to use an example - the pushbutton widget. At first glance, the event handling for a pushbutton is easy: when you receive a <ButtonPress> event, the program should invoke the action related to that button - for example, save the file being worked on, by calling the function save_myfile() written by the programmer. But many buttons are more complicated, to make them easier and safer to use. When you press on an Athena pushbutton, it reverses colour to show that it has been pressed; the button is now `set' and only when you release it again is its action to be invoked. More precisely, only if you release the mouse with the pointer still inside the button will it be activated; if you move the pointer out of the button before releasing the mouse button, the pushbutton `resets' - goes back to its normal state - and no action is invoked. To make things more complicated, what if the program is being used on a machine that doesn't have a mouse, and the pushbutton has to be invoked from the keyboard?

So, event handling really is quite complex; we need to process <ButtonPress> and <ButtonRelease> events, and be prepared to notice <LeaveNotify> events in case the pointer is moved out of the button (and <EnterNotify> events in case it comes back in again). We have to process keyboard events to allow activation from the keyboard, but only invoke the action if the user presses the Space or Enter key.

Imagine having to write all that code for each button in your application! What the Intrinsics does to get over this is define a set of callbacks for the button, to represent the high-level occurrences that you are interested in, and let you specify one or more functions in your program which are to be executed whenever these occur. The Intrinsics handles the low-level events and notices when these indicate that the callback should be executed. For example, the Athena pushbutton widget defines the notify callback, which is called when the button is activated. Thus, all the applications programmer has to do, having created the button widget, is call a subroutine once to tell the Intrinsics that the function save_myfile() is to be executed whenever the notify callback occurs - in other words, register save_myfile() as the callback procedure. After that, the Intrinsics manages all the events and automatically invokes the callback functions as required. A code fragment illustrating this is shown in Figure A.
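
Below is a hedged sketch of what such a fragment might look like, using the Athena Command (pushbutton) widget; the widget names and the file-saving code are hypothetical, and the Command widget's callback list is its XtNcallback resource.

    /* A sketch only: create a pushbutton and register save_myfile() as
       its callback procedure.  'parent' is assumed to be an existing
       composite widget. */
    #include <X11/Intrinsic.h>
    #include <X11/StringDefs.h>
    #include <X11/Xaw/Command.h>

    static void save_myfile(Widget w, XtPointer client_data, XtPointer call_data)
    {
        /* application-specific work: save the current file to disk */
    }

    Widget make_save_button(Widget parent)
    {
        Widget button = XtVaCreateManagedWidget("save", commandWidgetClass,
                                                parent, XtNlabel, "Save", NULL);

        /* executed whenever the button's notify callback occurs */
        XtAddCallback(button, XtNcallback, save_myfile, NULL);
        return button;
    }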

You can register several callback procedures for a single callback. When the callback is invoked, these procedures are executed in the order they were registered.

Even if the programmer doesn't specify a callback procedure, when a callback occurs the widget may still perform many functions. However, these typically relate only to the widget itself or its near relatives, performing internal `housekeeping' tasks rather than application-defined functions. For example, with a pushbutton, even if you haven't specified a callback procedure, when you push the button it `moves in' and reverses its colour; when you release it again, it springs back again and reverts to its normal colour. Similarly, if you select from a pull-down menu an item that has no callback procedure, the button changes colour as usual, and causes the menu to disappear. In Module 12.4 we cover actions linked to callbacks in more detail.

Figure A. Using a callback procedure in a program.

7.4.2 Non-X callbacks - file input, and timers

The Toolkit provides three special types of callback to handle occurrences outside the user interface. `Alternate input sources' take input from a file or a serial line or another process. `Timers' let you execute a procedure at a specified time in the future, and `workprocs' let you do useful background work when the Toolkit would otherwise be idle.

The mechanisms we described in previous modules all relate to events and input coming from the user interface - from some type of X component. However, some programs need to process input that does not originate within X. In UNIX the input is read from a file descriptor, so it can in fact be from a normal file (which is rarely used - normal file processing is usually adequate) or from a hardware input port such as a serial line, or from another local or remote process. Let us look at some examples of these.

Many X applications read input from other programs. Terminal emulators need to read the input from the program they are running in the emulator window. The MIT program xconsole takes the input that would normally be printed on the workstation's console screen and displays it in a window, instead. And often, X is used to provide a graphical front end to an existing application running in the background, to make it easier to use; these front end programs need to input whatever the background application prints as its output.

Reading from hardware devices and serial lines can be used to add some types of new input device. For example, a dial for specifying how on-screen objects are to be rotated can be integrated this way. (However, this makes the device available only within this single toolkit application; integrating a new device to act as the main pointer would require the Input server extension of Module 3.8.) Similarly, the output of instruments, perhaps connected via an analogue-to-digital converter on a serial line, can be processed.

Using the Toolkit, you handle these inputs in a uniform way just like normal callbacks, by registering them as alternate input sources using a specific Toolkit function, and specifying a callback function to be executed whenever input is available to be read on the particular file descriptor. Within the callback function, a read is usually performed to retrieve the available data, which is then processed according to the application's requirements.
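
A hedged sketch of this, assuming a UNIX file descriptor fd that might be a pipe from a background process or a serial line (the handler name and the data processing are hypothetical):

    /* Register fd as an alternate input source; input_ready() is called
       from the Intrinsics' event loop whenever data is waiting. */
    #include <unistd.h>
    #include <X11/Intrinsic.h>

    static void input_ready(XtPointer client_data, int *source, XtInputId *id)
    {
        char buf[256];
        int  n = read(*source, buf, sizeof(buf));   /* fetch the waiting data */
        if (n > 0) {
            /* process the data according to the application's needs */
        }
    }

    void watch_fd(XtAppContext app, int fd)
    {
        XtAppAddInput(app, fd, (XtPointer) XtInputReadMask, input_ready, NULL);
    }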

Timers

The Toolkit allows you to register a timer callback function which is to be executed after a specified time interval. These are very like normal callbacks, but they are initiated by time elapsing rather than window or user-input events.

Typical applications of this type of function are alarms, regular updating of some display (for example, moving the hands of a clock every minute or every second), or regular status checks (for example, e-mail programs typically look in your mailbox every so often to see if any new mail has arrived). Many Toolkit programs also use timers with very short intervals to make items flash on screen: for example, the flashing outline in xmag, and editres's flashing of widgets. Each time such a timer callback is executed, it redraws the item in the reverse of the colour it is currently drawn in.
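
A hedged sketch of a self-re-registering timer, of the kind a clock program might use (the redraw_clock() routine is hypothetical):

    /* Xt timers fire only once, so the callback registers the next one
       itself, giving a tick roughly every second. */
    #include <X11/Intrinsic.h>

    static void tick(XtPointer client_data, XtIntervalId *id)
    {
        XtAppContext app = (XtAppContext) client_data;

        /* redraw_clock();  -- hypothetical application code to move the hands */

        XtAppAddTimeOut(app, 1000, tick, client_data);   /* 1000 ms from now */
    }

    void start_clock(XtAppContext app)
    {
        XtAppAddTimeOut(app, 1000, tick, (XtPointer) app);
    }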

Background work procedures

When the Intrinsics is in its main event-loop waiting for an event to arrive, it is doing nothing. You can register a work procedure or workproc which is to be executed when the Intrinsics has nothing else to do, to make use of spare CPU cycles. Again, this procedure is very similar to a callback procedure, but is triggered for different reasons.

If the workproc returns the value False when it completes, it remains registered, and will be executed again when no events are pending; if it returns True, indicating the task is completed, the Intrinsics unregisters it and removes it. This lets the programmer control the use of the workproc, typically leaving it in place doing a small piece of its task each time it is called, only returning True when there's nothing left to do.
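
A hedged sketch of this contract - the workproc below pretends to process one item of a larger (hypothetical) task per call, and returns True only when everything is done:

    /* Returning False keeps the workproc registered; returning True tells
       the Intrinsics the task is finished and the workproc can be removed. */
    #include <X11/Intrinsic.h>

    static Boolean do_a_little_work(XtPointer client_data)
    {
        int *items_left = (int *) client_data;

        /* process just one item, to keep the application responsive */
        (*items_left)--;

        return (*items_left <= 0) ? True : False;
    }

    void start_background_task(XtAppContext app, int *items_left)
    {
        XtAppAddWorkProc(app, do_a_little_work, (XtPointer) items_left);
    }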

Because events that arrive while a workproc is executing are queued until it has finished, workprocs should be written so that they only take a short time to execute, if they are not to impair the application's interactive response. They should do a little work often, rather than a lot at one time.

Workprocs are often used to save time by creating the widgets within pop-up menus and dialogs before they are actually required, while the application is waiting, doing nothing else. When the dialog is finally required, it can be displayed quickly because there is no delay while its widgets are created, so the response of the program is improved. Workprocs are also used to update parts of the user interface which can be allowed to get a little bit out of date. For example, in a DTP system or text editor, updating or scrolling the image on screen can be done by a workproc. The result is that keeping the image current has a lower priority than reading and processing user input; this is a reasonable strategy because what the user is typing may render obsolete the image that the program was about to re-display. For example, if the user has moved to the middle of a document, completing the scrolling and painting of the current page is a waste of time if they have subsequently just pressed the goto-end-of-file key.

Another major use of workprocs is performing large computations, while still allowing the application to respond to user input (see Module 14.1.2).

7.5 Some implementations of Motif and OPEN LOOK use Xt

Most Motif implementations and some OPEN LOOK ones use the standard Intrinsics. These `toolkits' are really widget sets, with some extra functions, specific to the look and feel, provided in additional libraries.

The principal implementation of the Motif `standard' look and feel - the source code distributed by the Open Software Foundation - is Intrinsics-based. Most Motif development and user systems, from both hardware vendors and third-party suppliers, are founded on this released code. There are some other systems that provide the Motif look and feel which do not use this code as a basis. Of these, most are special toolkits that aim to provide GUI-independent programming, where the programmer can write one source program which when compiled on different systems (typically Microsoft Windows, Macintosh, and UNIX with Motif) will use the native GUI of that system. (See Module 7.6 for more detail.)

The OPEN LOOK GUI is also available as an Intrinsics-based toolkit, called Xt+ or OLIT. However, its most common implementation is probably the one called XView, which was specifically designed to simplify porting applications to X from Sun's proprietary SunView system; XView doesn't use the Intrinsics. Figure A shows how the same look and feel is provided by two very different internal program structures.

The fact that very different GUIs can be based on the same toolkit Intrinsics shows how flexible the X toolkit mechanism is. It also means that even if an organization chooses to adopt one particular look and feel, Motif, say, the programming techniques and skills gained by its programmers are equally applicable to Motif and OPEN LOOK, because the underlying mechanisms are provided by the Intrinsics. So, later on, migrating or porting to OPEN LOOK would not be a difficult task (although it would be long and tedious, because all the widget names, the widget parameters, and how they interact are different).

In the case of the Intrinsics-based toolkits, what the toolkit vendor is really providing is a set of widgets, because most of the basic functionality is provided by the standard system. Vendors do supply some extra libraries of functions as well, to augment the standard Intrinsics functions. Many of these are simple convenience functions - just `wrappers' around standard Intrinsics functions. For example, Motif provides a special function to create a pushbutton widget, but it is just as easy to call the standard Intrinsics widget-create function with an argument specifying that the widget type is to be a pushbutton. However, other functions perform more complex tasks. For example, a menu creation function not only creates the basic widget that forms the menu pane, but also configures it for correct operation - so that all the menu items are aligned vertically (for a pull-down menu) and the event processing is correctly initialized to allow the menu to be invoked, and items selected from it, via the keyboard.
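
As a hedged illustration of the `wrapper' point (the parent widget and button names are hypothetical), the Motif convenience call and the generic Intrinsics call below create much the same pushbutton:

    #include <Xm/PushB.h>

    void create_buttons(Widget parent)
    {
        /* the Motif convenience function ... */
        Widget b1 = XmCreatePushButton(parent, "save1", NULL, 0);
        XtManageChild(b1);          /* XmCreate...() does not manage the child */

        /* ... and the equivalent generic Intrinsics call */
        Widget b2 = XtVaCreateManagedWidget("save2", xmPushButtonWidgetClass,
                                            parent, NULL);
    }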

The toolkits also provide enhanced support for communication between applications. Motif provides a clipboard for cutting and pasting. OPEN LOOK includes support for drag and drop - selecting an item in one program, and communicating this selection to another application or a different part of the same one, by dragging the item with the mouse pointer, and dropping it onto some other appropriate object. (For example, you might print a file by dragging its icon and dropping it onto a printer's icon.) Other features included are support for localization of applications, so that one application may be produced which is suitable for use with many different national languages and character sets. (See also Module 5.3.)

Toolkits providing both Motif and OPEN LOOK look and feel

Many software developers, especially those developing software for sale rather than in-house use, don't want to choose just Motif or just OPEN LOOK - some of their customers will want an OPEN LOOK product, whereas others will have standardized on Motif and will want a Motif version. Intrinsics-based toolkits are now becoming available which let you write your program using `generic' user interface components (for example, MooLIT from UNIX System Laboratories). You specify just a scrollbar or a pushbutton, not a Motif scrollbar or an OPEN LOOK pushbutton, and compile your program with the Intrinsics and `dual-look and feel' toolkit libraries, giving you a single executable (binary) program. Then at run time you specify which look and feel you want to use. Thus you can have two instances of an application running on the same CPU from the same program file on disk, one offering the OPEN LOOK look and feel, the other Motif.

Similar facilities are provided by several non-Intrinsics toolkits (Module 7.6).

Figure A. The same OPEN LOOK look and feel is offered by two very different toolkits.

7.5.1 Motif's User Interface Language - UIL

The standard Motif system includes a special language for specifying the user interface of the application. This is called UIL - the User Interface Language. You specify in a text file the tree-like hierarchy of widgets your application needs, and compile it separately from your main program. The main program interprets this file at run time using special Motif functions. This lets you alter the user interface easily, changing only the UIL file, without having to change, or even recompile or relink, the main program.

UIL is a special-purpose language for describing a hierarchy of widgets in an application, plus the various parameters for each widget, such as position, size, text to be displayed in labels, menu composition, etc. You write the widget descriptions in a text file, called a .uil file. Then, using a special UIL compiler, you convert this text file into a .uid file.

Separately, you write and compile and link your main program (Figure A), but you omit all the code you would normally include to create widgets, position them, specify their parameters, etc. Instead, you include a few special functions (`Motif Resource Manager' or MRM functions) provided in a Motif library; at run time these functions read in the already-prepared .uid file, and dynamically create and configure the widgets specified in that file (Figure B).
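
A hedged sketch of the run-time side of this, using the MRM functions; the .uid file name, the widget name `main_window', and the omission of error checking are all for illustration only.

    /* Open the compiled .uid file and create the widget tree it describes. */
    #include <Mrm/MrmPublic.h>
    #include <Xm/Xm.h>

    void build_interface(Widget toplevel)
    {
        static String uid_files[] = { "myapp.uid" };
        MrmHierarchy  hierarchy;
        MrmType       wclass;
        Widget        main_window;

        MrmInitialize();                              /* once, before fetching */
        MrmOpenHierarchyPerDisplay(XtDisplay(toplevel), 1, uid_files,
                                   NULL, &hierarchy);
        MrmFetchWidget(hierarchy, "main_window", toplevel,
                       &main_window, &wclass);
        XtManageChild(main_window);
        MrmCloseHierarchy(hierarchy);
    }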

The advantages of this over programming the user interface by hand, using C calls, are:

While UIL makes it easy to specify the layout and appearance of your program's user interface, it only allows you to provide a static (fixed) description of the interface. It is really a layout specification, and UIL has no programming facilities for specifying the time-dependent (dynamic) aspects of the interface. For example, you cannot write some UIL code to perform a function like `if the user pressed pushbutton A, then call up dialog B, but if they pressed button C as well, don't show options X, Y and Z on the dialog, and expand widget W'. You have to program sequences such as this in your main program, probably with callback procedures.

Many X interface development systems and program generators output UIL code from which the final program is built.

This concludes our look at the X standard toolkit facilities. In the next module we look at some other toolkits that are not based on the Intrinsics.

Figure A. Using UIL.

Figure B. At run time the Motif program reads in the user interface description from the .uid file.

7.6 Other X toolkits

Many toolkits do not use the Intrinsics at all, often because they are implemented in a language other than C. Some of these provide the same look and feel as Intrinsics-based toolkits.

The Intrinsics is the X standard mechanism for building toolkits, but it is not the only mechanism. There are other toolkits which do not use the Intrinsics, for many reasons listed below. The schematic structure of these is contrasted with a standard Intrinsics-based application, and one using only Xlib functions, in Figure A.

As we mentioned previously in Module 6.4, the fact that an application has a particular look and feel does not necessarily mean that it was implemented with a particular toolkit. Remember, look and feel is the appearance and external behaviour of a program: internally, an application can provide the same look and feel in many different ways, and there are many common examples of this:

And of course with toolkits such as OI which we mentioned above, and the dual-interface toolkits of Module 7.5, the reverse is the case: a single internal program structure is capable of providing more than one distinct look and feel.

Figure A. Schematic structure of applications based on Xlib, the Intrinsics, and other toolkits.


Summary

In this chapter we have looked at how X supports user interface toolkits.

We saw that most applications use the standard Xt Toolkit, which consists of a basic object-oriented framework provided by the Intrinsics, with widgets created using this framework. Primitive widgets are typically single-purpose user interface components, while composite widgets are containers to manage the layout of other widgets, for example in menus. The Toolkit offloads most of the event-handling complexity from the application programmer, and by means of callbacks simplifies integrating the user interface with the application-related code. We also looked briefly at non-Intrinsics toolkits.

In fact we have dealt with only one part of the overall user interface the user sees - the part relating to how you interact with the graphical interface components within a particular application, which we call the `application interface'.

The other part of the total user interface is how you control the layout of your screen as a whole - how you position one application's window relative to another, how you move and resize the applications on your screen, and how you move control from one application to another. In the next chapter we move on to look at this other major part of the whole user interface - the `management interface', which is determined by the window manager.