Analysis of the information sources cited in the references of the Chinese-language Wikipedia article "Future与promise".
In this paper we consider an "eager beaver" evaluator for an applicative programming language which starts evaluating every subexpression as soon as possible, and in parallel. This is done through the mechanism of futures, which are roughly Algol-60 "thunks" which have their own evaluator process ("thinks"?). (Friedman and Wise [10] call futures "promises", while Hibbard [13] calls them "eventuals".) When an expression is given to the evaluator by the user, a future for that expression is returned which is a promise to deliver the value of that expression at some later time, if the expression has a value. A process is created for each new future which immediately starts to work evaluating the given expression. ……
The intuitive semantics associated with a future is that it runs asynchronously with its parent's evaluation. This effect can be achieved by either assigning a different processor to each future, or by multiplexing all of the futures on a few processors. Given one such implementation, the language can easily be extended with a construct having the following form: "(EITHER <e1> <e2> … <en>)" means evaluate the expressions <ei> in parallel and return the value of "the first one that finishes".
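The EITHER construct can be sketched in Python using concurrent.futures; the function name `either` and its thunk-based interface are illustrative assumptions, not part of the paper.

```python
import concurrent.futures
import time

def either(*thunks):
    """Evaluate all thunks in parallel and return the value of the first
    one that finishes (a rough sketch of EITHER; losing computations are
    abandoned rather than killed)."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in thunks]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

print(either(lambda: (time.sleep(0.2), "slow")[1],
             lambda: "fast"))  # fast
```

Note that the pool's shutdown on exiting the `with` block still waits for the losing thunks to run to completion; a full implementation of EITHER would also cancel them.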
Promises were designed to support an efficient asynchronous remote procedure call mechanism for use by components of a distributed program. A promise is a place holder for a value that will exist in the future. It is created at the time a call is made. The call computes the value of the promise, running in parallel with the program that made the call. When it completes, its results are stored in the promise and can then be “claimed” by the caller. …… Also published in ACM SIGPLAN Notices 23(7).
Call-streams allow a sender to make a sequence of calls to a receiver without waiting for replies. The stream guarantees that the calls will be delivered to the receiver in the order they were made and that the replies from the receiver will be delivered to the sender in call order. ……
The design of promises was influenced by the future mechanism of MultiLisp[5]. Like futures, promises allow the result of a call to be picked up later. However, promises extend futures in several ways: Promises are strongly typed and thus avoid the need for runtime checking to distinguish them from ordinary values. They allow exceptions from the called procedure to be propagated in a convenient manner. Finally, they are integrated with the call-stream mechanism and address problems such as node failures and network partitions that do not arise in a single-machine environment. ……
There are two reasons for using stream calls instead of RPCs: they allow the caller to run in parallel with the sending and processing of the call, and they reduce the cost of transmitting the call and reply messages. RPCs and their replies are sent over the network immediately, to minimize the delay for a call. Stream calls and their replies, however, are buffered and sent when convenient; in the case of sends, normal replies can be omitted. Buffering allows us to amortize the overhead of kernel calls and the transmission delays for messages over several calls, especially for small calls and replies.
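The call/claim life cycle described above can be sketched in Python; the name `async_call` and the use of `concurrent.futures.Future` as the placeholder are illustrative assumptions, not part of the paper.

```python
import threading
from concurrent.futures import Future

def async_call(fn, *args):
    """Return a promise (placeholder) immediately; the call runs in
    parallel with the caller and stores its result in the promise,
    which the caller claims later."""
    promise = Future()
    def worker():
        try:
            promise.set_result(fn(*args))
        except Exception as exc:
            promise.set_exception(exc)  # exceptions propagate to the claimer
    threading.Thread(target=worker).start()
    return promise

p = async_call(lambda x: x * x, 7)  # the caller keeps running here...
print(p.result())                   # ...and claims the value later: 49
```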
Concurrent logic languages are born from a new interpretation of Horn clauses, the process interpretation. According to this interpretation, an atomic goal ← C can be viewed as a process, a conjunctive goal ← C1, …, Cn as a process network, and a logic variable shared between two clauses can be viewed as a communication channel between two processes.
Multilisp's principal construct for both creating tasks and synchronizing among them is the future. The construct (future X) immediately returns a future for the value of the expression X and concurrently begins evaluating X. When the evaluation of X yields a value, that value replaces the future. The future is said to be initially undetermined; it becomes determined when its value has been computed. An operation (such as addition) that needs to know the value of an undetermined future will be suspended until the future becomes determined, but many operations, such as assignment and parameter passing, do not need to know anything about the values of their operands and may be performed quite comfortably on undetermined futures. ……
The future construct in Multilisp, for example, offers a way to introduce parallelism that fits very nicely with the principal program operation of Multilisp — expression evaluation. Also, no special care is required to use a value generated by future. Synchronization between the producer and the users of a future’s value is implicit, freeing the programmer’s mind from a possible source of concern.
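The behaviour of (future X) can be approximated in Python terms as below. Unlike Multilisp, where touching an undetermined future is implicit, Python makes the touch explicit through .result(); the sleep is just a stand-in for a long computation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

# Submitting X returns an undetermined future at once, while X is
# evaluated concurrently.
fut = pool.submit(lambda: (time.sleep(0.1), 21)[1])

# Assignment and parameter passing need not know the value: the
# undetermined future can be stored or passed around freely.
box = [fut]

# An operation that needs the value (here, addition) must touch the
# future, suspending until it becomes determined.
print(box[0].result() + box[0].result())  # 42
```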
The proposed design of this module was heavily influenced by the Java java.util.concurrent package [1]. The conceptual basis of the module, as in Java, is the Future class, which represents the progress and result of an asynchronous computation. The Future class makes little commitment to the evaluation mode being used, e.g. it can be used to represent lazy or eager evaluation, for evaluation using threads, processes or remote procedure call.
Futures are created by concrete implementations of the Executor class (called ExecutorService in Java). The reference implementation provides classes that use either a process or a thread pool to eagerly evaluate computations.
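A minimal usage sketch of the module described above (Python's concurrent.futures):

```python
from concurrent.futures import ThreadPoolExecutor

# An Executor implementation (here a thread pool) eagerly evaluates the
# submitted computations and hands back Future objects that track their
# progress and results.
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(pow, 2, n) for n in range(5)]
    results = [f.result() for f in futures]

print(results)  # [1, 2, 4, 8, 16]
```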
This paper describes the asynchronous support in F# 2.0. While the core idea was released and published in book form in 2007, the model has not been described in the conference literature.
A component of a data structure may have I-structure semantics (as opposed to functional or M-structure semantics). For such a component, no value is specified when the data structure is created; instead, a separate assignment statement is used. An I-structure component may be in one of two states: full (with a value) or empty. All components begin in the empty state (when the data structure is allocated).
An I-structure component has a single assignment restriction, i.e. it can only be assigned once, at which point its state goes from empty to full. Any attempt to assign it more than once is caught as a runtime error. The component can be read an arbitrary number of times.
……
An M-structure component can be assigned with a put operation and read with a take operation. A value can be put only into an empty component — it is a runtime error if it is already full. Many takes may be attempted concurrently on a component.
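The two kinds of components can be sketched in Python with threading primitives. The class names and exact error behaviour are illustrative assumptions; in particular, the empty/full check in assign is not made atomic here, so this is a sketch rather than a race-free cell.

```python
import threading

class IStructure:
    """Sketch of one I-structure component: empty until assigned once;
    reads block while empty; a second assignment is an error."""
    def __init__(self):
        self._full = threading.Event()
        self._value = None

    def assign(self, value):
        if self._full.is_set():                 # single-assignment restriction
            raise RuntimeError("I-structure assigned twice")
        self._value = value
        self._full.set()                        # empty -> full; wakes readers

    def read(self):
        self._full.wait()                       # blocks while empty
        return self._value                      # reads are unrestricted

class MStructure:
    """Sketch of one M-structure component: put fills an empty cell,
    take empties a full one; concurrent takes each receive one put."""
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._value = None

    def put(self, value):
        with self._cond:
            if self._full:                      # put into a full cell is an error
                raise RuntimeError("put into full M-structure")
            self._value, self._full = value, True
            self._cond.notify()                 # wake one waiting take

    def take(self):
        with self._cond:
            while not self._full:
                self._cond.wait()               # block until a put arrives
            self._full = False                  # full -> empty
            return self._value

iv = IStructure()
iv.assign(5)
print(iv.read(), iv.read())  # 5 5
```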
Currently, you can run the operation on a background thread or using a Task, but coordinating multiple such operations is difficult. …… This problem has been the main motivation for including asynchronous workflows in F# about 3 years ago. In F#, this also enabled various interesting programming styles - for example creating GUI using asynchronous workflows ……. The C# asynchronous programming support and the await keyword is largely inspired by F# asynchronous workflows …….
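The coordination problem described above can be illustrated with Python's asyncio, an await-based model in the same family as F# asynchronous workflows and C#'s await; the fetch function and its delays are purely illustrative.

```python
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for real asynchronous I/O
    return f"{name} done"

async def main():
    # Compose three asynchronous operations and wait for all of them,
    # with no manual thread or callback plumbing.
    return await asyncio.gather(fetch("a", 0.01),
                                fetch("b", 0.02),
                                fetch("c", 0.01))

results = asyncio.run(main())
print(results)  # ['a done', 'b done', 'c done']
```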
The SyncVar structure provides Id-style synchronous variables (or memory cells). These variables have two states: empty and full. An attempt to read a value from an empty variable blocks the calling thread until there is a value available. An attempt to put a value into a variable that is full results in the Put exception being raised. There are two kinds of synchronous variables: I-variables are write-once, while M-variables are mutable.
As base language we choose a dynamically typed language DML that is obtained from SML by eliminating type declarations and static type checking. ……
A state is a finite function σ mapping addresses a to so-called units u. Units are either primitive values other than names or representations of records, variants, reference cells, functions, and primitive operations …… A match Match is a sequence of clauses (p1 ⇒ e1 | … | pk ⇒ ek). ……
We now extend DML with logic variables, one of the essentials of logic programming. Logic variables are a means to represent in a state partial information about the values of addresses. Logic variables are modelled with a new unit lvar. The definition of states is extended so that a state may map an address also to lvar or an address. ……
A match …… blocks until the store contains enough information to commit to one of the clauses or to know that none applies.
General logic variables stem from the class of logic programming languages such as Prolog [Pro85, SS94, JL87]. Initially, when freshly introduced, they carry no value. Therefore they allow for the stepwise construction of values, using further logic variables for the construction of subvalues if necessary. They are transient, in that they are identified with their value as soon as this becomes available. This provides a mechanism for implicit synchronization of concurrent threads that share a logic variable: A thread reading the variable automatically suspends while sufficient information is not available.
We will be concerned with futures and promises, which differ from general logic variables in that a distinction is made between reading and writing them. Bidirectional unification can be replaced by (single-) assignment.
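The read/write distinction can be sketched by splitting a single-assignment cell into a write end (the promise) and a read end (the future); all names below are illustrative assumptions.

```python
import threading

class _Future:
    """Read end: get() blocks until the associated promise is fulfilled."""
    def __init__(self, event, cell):
        self._event, self._cell = event, cell

    def get(self):
        self._event.wait()          # suspend while no value is available
        return self._cell[0]

class Promise:
    """Write end: single assignment only. Separating the read and write
    capabilities is the distinction from general logic variables noted
    above; unification is replaced by single assignment."""
    def __init__(self):
        self._event = threading.Event()
        self._cell = [None]
        self.future = _Future(self._event, self._cell)

    def fulfill(self, value):
        if self._event.is_set():
            raise RuntimeError("promise already fulfilled")
        self._cell[0] = value
        self._event.set()           # wakes every reader of the future

p = Promise()
p.fulfill(42)
print(p.future.get())  # 42
```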