<<< REPORT 1. >>>
"SOCIABLE COMPUTERS, EMOTIONS AND COMMON SENSE."
JACOB MARSCHAK INTERDISCIPLINARY COLLOQUIUM
Marvin Minsky
Massachusetts Institute of Technology

ABSTRACT: You could say that this talk was intentionally disorganized. Minsky proved to be a very entertaining and edifying man, quick to voice and defend his opinions and able to amuse the crowd with his tricks of overhead manipulation.

SUMMARY: In this talk Marvin Minsky covered a wide variety of topics. He asserted that most of the attributes routinely ascribed to humans but considered beyond the capacity of machines, such as emotions and self-awareness, are in fact attainable by both. He further claimed that most of the arguments presented in refutation of this assertion, such as the Chinese room thought experiment, simply beg the question, forcing us to posit the existence of some homunculus in which the same issues arise. He noted that it might be more productive for current programmers to focus their efforts on improving the languages and systems with which they work, pointing out in passing the futility of trying to write a program without giving it some understanding of what it is trying to do. He mentioned that his preferred science fiction genre, weak characterizations in conjunction with strong ideas, is currently out of vogue, and that his presentation a day earlier at the Getty Institute was a "fiasco". He commented on the difference between situational scripts, in which the program decides which actions to take from its current situation alone (which he thought of), and differential scripts, in which the program decides which actions to take from the difference between its current situation and its goal (which he wishes he had thought of). In reply to a question by Mike Dyer he discussed the different ways in which a person's consciousness may play continuity tricks, and how this relates to the way in which awareness might actually be implemented.

<<< REPORT 2. >>>
"COGNITIVE COMPUTATION"
CSD Distinguished Lecture Series
Leslie G. Valiant
Harvard University

ABSTRACT: This talk, while providing a nice overview of some of the ways in which logic-based and learning-based methodologies relate to each other, could have benefited from some more implementation-specific details.

SUMMARY: In this talk Leslie G. Valiant discussed the differences between logic-based and learning-based systems. Logic-based systems are those driven by analytic reasoning, for example deductive and inductive expert systems; learning-based systems are those based on the classification of patterns using examples as input, for example neural networks. In particular, he pointed out how the early success of logic-based systems might even have led one to believe that such systems comprised the whole of computer science. However, when one considers, for example, the classic Turing test, it turns out to be essentially a test of learning. Turing himself, in the conclusion of his paper, estimated that a program able to pass his test could be written within about fifty years, and suggested that the right way to pass the test would be to write a program that learns how to pass it. It is by using learning as a basis that learning-based systems will be able to cope with:

- Robustness.
- Incomplete information.
- Non-monotonic decision-making processes.
- Contradictory input.

These are also precisely the areas that pose the greatest difficulty for logic-based systems.
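
The contrast with hand-coded rules can be suggested with a toy sketch (my own illustration, not code from the talk): a simple perceptron trained on examples that include some contradictory labels still recovers a usable decision rule, which is exactly the kind of graceful degradation a purely logic-based rule set lacks.

    # Toy sketch (not from the talk): a perceptron learns a usable rule from
    # noisy, partially contradictory examples, where a hand-coded logical rule
    # would simply be inconsistent.
    import random

    random.seed(0)

    def target(x):
        # the "true" concept: points above the line x1 + x2 = 1
        return 1 if x[0] + x[1] > 1.0 else 0

    # Training data with roughly 10% contradictory (mislabeled) examples.
    data = []
    for _ in range(200):
        x = (random.random() * 2, random.random() * 2)
        y = target(x)
        if random.random() < 0.1:
            y = 1 - y          # label noise: some examples contradict the concept
        data.append((x, y))

    # Perceptron with a bias weight w[0].
    w = [0.0, 0.0, 0.0]
    for _ in range(50):
        for (x1, x2), y in data:
            pred = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
            err = y - pred
            w[0] += 0.1 * err
            w[1] += 0.1 * err * x1
            w[2] += 0.1 * err * x2

    # Evaluate the learned rule against the noise-free concept.
    test = [(random.random() * 2, random.random() * 2) for _ in range(1000)]
    correct = sum(
        (1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0) == target((x1, x2))
        for x1, x2 in test
    )
    print("accuracy on clean test data:", correct / len(test))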

<<< REPORT 3. >>>
"THE MBONE: ENABLING LIVE, MULTIPARTY, MULTIMEDIA COMMUNICATION ON THE INTERNET"
CS201 Lecture
Steve Deering
Cisco Systems

ABSTRACT: This talk gave a nice overview of the history, implementation, and future of the Multicast Backbone (MBone). The nefarious goals of the brain-dead telcos were also hinted at.

SUMMARY: In this talk Steve Deering discussed the explosive growth that the MBone has undergone. This growth, with the occasional growth-hormone assist from events such as the Rolling Stones multicast, has been essentially exponential and continues to be so. The essential idea behind the MBone is to provide a standard low-level protocol for the broadcast of data (i.e. one-to-many communication) that can then be used by any application with such a need. These applications currently include teleconferencing, whiteboarding, voice and video broadcast, and geophysical data. Unfortunately, the explosive growth of the MBone has led to scaling problems, primarily related to the essentially flat space in which its routing protocols operate and the unwillingness of ISPs to support these services except through tunnels. These problems are currently areas of active research, as are the problems relating to bandwidth and reliability as demand grows. The growth has also led to increasing commercialization, as routers begin to incorporate MBone support. The real question is whether or not private, application-specific protocols will overtake the standardized MBone protocol. This would be unfortunate, as one of its main strengths is its universality.

<<< REPORT 4. >>>
"ULTRASPARC-II: SUN'S SECOND GENERATION 64-BIT PROCESSOR WITH MULTIMEDIA SUPPORT"
CS201 Lecture
Dr. Marc Tremblay
Sun Microelectronics

ABSTRACT: This talk covered the capabilities of Sun's 64-bit UltraSPARC processors, with particular emphasis on those instructions dedicated to multimedia applications, followed by a brief overview of Sun's proposed Java chips.

SUMMARY: In this talk Marc Tremblay discussed the development of the UltraSPARC processor family. Register file windows, a nine-stage pipeline, a large (4 MB) L2 cache, a 64-bit adder, a 16 KB L1 data cache, a 96 KB L1 instruction cache, and prefetching together make for a very fast chip. The emphasis of the talk was on the addition of multimedia support to the instruction set, essentially consisting of a set of SIMD pixel operations chosen after an analysis of actual multimedia applications. This allows for real-time decoding of a 720x480 MPEG-2 stream at 30 fps. These operations consist of four-pixel adds, multiplies, compares, expands, packs, etc., and allow for accelerated operation in RGBAZ colorspace. A question was raised as to whether or not the alpha channel really required 8 bits. In addition, someone wanted to know who was responsible for the asymmetric 31x23x15 bit allocation of RGB. The Java processors were only mentioned in passing at the end. Several families are proposed, with the main differentiation between them being performance, which is tuned to fit the intended function (e.g. network computer, wireless client, etc.).
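
The flavor of these partitioned pixel operations can be suggested with a toy sketch (my own illustration, not the actual VIS instruction semantics): four 8-bit components are packed into a single 32-bit word and added lane by lane with saturation.

    # Toy sketch of partitioned (SIMD-style) pixel arithmetic: four 8-bit
    # components packed into a 32-bit word, added lane-by-lane with saturation.
    # This only illustrates the idea, not the real instruction set.

    def pack4(components):
        """Pack four 8-bit values (0-255) into one 32-bit word."""
        a, b, c, d = components
        return (a << 24) | (b << 16) | (c << 8) | d

    def unpack4(word):
        """Unpack a 32-bit word into four 8-bit values."""
        return [(word >> shift) & 0xFF for shift in (24, 16, 8, 0)]

    def padd8_saturate(x, y):
        """Lane-wise add of two packed words, clamping each lane to 255."""
        return pack4([min(a + b, 255) for a, b in zip(unpack4(x), unpack4(y))])

    fg = pack4([200, 100, 50, 255])   # e.g. R, G, B, A of one pixel
    bg = pack4([100, 200, 30, 0])
    print(unpack4(padd8_saturate(fg, bg)))   # -> [255, 255, 80, 255]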
>>> "SYSTEM-LEVEL SYNTHESIS OF APPLICATION SPECIFIC SYSTEMS" CS201 Lecture Miodrag Potkonjak Department of Computer Science University of California, Los Angeles ABSTRACT: This talk covered the advantages of the system-level synthesis design process, with the emphasis being on how it is possible, using optimizations at the algorithmic level, to achieve far greater gains in speed, area, and power consumption than is possible at the implementation level. SUMMARY: The discussed design process can be summarized as: - Application: application analysis - Algorithm: design selection - Architecture: structural properties - Behavioral synthesis: transformations, estimations, new design goals, analysis - Logic synthesis/physical design: design work flow Many applications (DSP, video servers, wireless, ATM, embedded controllers, etc.) typically require embedded controllers to provide their functionality. These embedded controllers typically have very stringent power and speed requirements that can only be met through aggressive optimization and custom logic. System-level synthesis, in particular, the matching of application to algorithm, allows for far greater optimization than is seen in optimizations at later points in the design process, where the performance metric might already be at some merely local minimum in the global design space. This was demonstrated in the design of a mobile computing chip for a joint UCLA CS/EE project, in which algorithmic optimization proved far better than competent VLSI designers at achieving low power consumption and meeting specifications. <<< REPORT 6. >>> "A NOVEL SELF-ORGANIZING NEURAL NETWORK FOR COMBINATORIAL OPTIMIZATION AND COMPUTER VISION" CS201 Lecture Soheil Shams Senior Research Staff Member Hughes Research Laboratories ABSTRACT: This talk covered some problems and solutions in image processing. In particular, the application of elastic band neural networks to different kinds of target tracking and image recognition were discussed. SUMMARY: The application of Multiple Elastic Modules (MEMs) to target tracking and pattern recognition was discussed. These are a particular type of neural network that share the properties of being inherently parallel, incomplete/noisy input tolerant, and fault tolerant. The current state of optimization and pattern recognition algorithms includes: - integer programming - genetic alogrithms - conventional Hopfield neural nets - multiple elastic modules - lagrangian relaxation The two application contexts in which MEMs were discussed were: - target deghosting, in which multiple radars are used to track a set of targets and the multiple returns must be integrated into a coherent whole. this involves deciding which returns correspond to the same targets and which returns are spurious based on how close the beams come to intersecting. two approaches to this problem are to allow the MEMs to float freely or to constrain them to lie on the beams. - image recognition, in which the MEMs are used as pattern templates, encoding the spatial relationships between various edges and textures. this allows for the real time tracking of various objects. <<< REPORT 7. >>> "NEW CHALLENGES IN VLSI CAD" CS201 Lecture Jason Cong UCLA Computer Science Department ABSTRACT: In this talk the current trends in VLSI and some consequent implications were discussed. Then more specific work in routing and interconnect sizing optimization was discussed. 

<<< REPORT 7. >>>
"NEW CHALLENGES IN VLSI CAD"
CS201 Lecture
Jason Cong
UCLA Computer Science Department

ABSTRACT: In this talk the current trends in VLSI and some of their implications were discussed, followed by more specific work in routing and interconnect sizing optimization.

SUMMARY: There is a long-running trend in VLSI of exponential growth in clock speed, die size, and transistor count, together with exponentially decreasing feature size. However, the rates of these trends differ widely, with feature size decreasing and transistor count increasing far more rapidly than the other metrics change. All of these trends make the following elements of VLSI design increasingly important:

- interconnect delay
- high design complexity
- multilayer gridless general area routing
- power dissipation minimization
- reducing design cycles

In particular, interconnect delay can account for 50%-70% of a clock cycle, and in a DSP chip most of the power might be consumed by the clock and buffering. This makes interconnect optimization increasingly important in VLSI design, including the issues of optimal interconnect topology, interconnect delay minimization, and optimal clock distribution. It also provides the driving force behind more sophisticated modeling of wires (i.e. as transmission lines), the use of non-uniform wire widths, combined device/wire optimization, and interconnect-driven VLSI design methodologies. The increasing complexity of VLSI also leads to the need for more systematic design methodologies, with the following sequence being followed:

- floor planning
- pin assignment
- global routing
- detailed routing

and to the use of FPGAs for rapid prototyping and design simplification.

<<< REPORT 8. >>>
"SCALABLE COMPUTER SYSTEM ARCHITECTURES INTO THE 21ST CENTURY"
CS201 Lecture
Forest Baskett
Silicon Graphics, Inc.
Mountain View, CA

ABSTRACT: In this talk the directions that high-end computer systems will have to take to continue increasing their performance were discussed. In particular, the need for scalable multiprocessing was noted.

SUMMARY: This talk presented a clear overview of the current state of the art in high-end computer architecture and the directions in which it will have to go to see continued performance gains in the future. The drawbacks of current bus-based multiprocessor implementations, in which all processors share main memory through a common bus, were discussed, in particular the diminishing returns of adding additional nodes in the face of increasing bus contention. A solution to this problem, network-based multiprocessors with directory-maintained memory coherence, in which each processor or processor node contains a portion of main memory and the rest can be accessed through the network, was then discussed, with particular attention to how such machines present a flat, coherent, system-wide memory space while scaling well as the number of nodes grows. Some directory-based schemes were then discussed. This might have been one of the rationales behind SGI's purchase of Cray, as SGI's current multiprocessor architectures (Challenge, etc.) are bus-based whereas Cray's are network-based (Cray T3D, etc.).
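
The directory idea can be suggested with a toy model (my own sketch, not any particular scheme presented in the talk): each memory block has a home directory entry listing which nodes hold a cached copy, so a write invalidates exactly those copies instead of broadcasting on a shared bus.

    # Toy model of directory-based cache coherence (an illustrative sketch,
    # not any specific protocol from the talk): a directory entry per block
    # tracks its sharers, so writes invalidate only those caches.

    class Directory:
        def __init__(self, num_nodes):
            self.sharers = {}            # block address -> set of node ids
            self.caches = [dict() for _ in range(num_nodes)]   # per-node cache
            self.memory = {}             # block address -> value

        def read(self, node, addr):
            if addr not in self.caches[node]:
                self.caches[node][addr] = self.memory.get(addr, 0)
                self.sharers.setdefault(addr, set()).add(node)
            return self.caches[node][addr]

        def write(self, node, addr, value):
            # Invalidate every other sharer recorded in the directory.
            for sharer in self.sharers.get(addr, set()) - {node}:
                self.caches[sharer].pop(addr, None)
            self.sharers[addr] = {node}
            self.caches[node][addr] = value
            self.memory[addr] = value    # write-through for simplicity

    d = Directory(num_nodes=4)
    d.read(0, 0x100); d.read(1, 0x100)   # nodes 0 and 1 cache the block
    d.write(2, 0x100, 42)                # directory invalidates nodes 0 and 1 only
    print(d.read(1, 0x100))              # node 1 re-fetches and sees 42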

<<< REPORT 9. >>>
"MODEL-BASED AUTONOMOUS SYSTEMS IN THE NEW MILLENIUM"
CS201 Lecture
Brian Williams
Computational Sciences Division
NASA Ames Research Center

ABSTRACT: In this talk the design and implementation of mobile autonomous intelligence for space probes was discussed. This intelligence must be able to react to novel situations in a timely and correct fashion, especially as the time constants become less favorable with respect to the communications lag.

SUMMARY: Most current robotic space probes that are designed to operate with some degree of autonomy do so by being programmed, in excruciating detail, to react to given situations by a team of experts back on Earth. These experts come up with a set of scenarios and, for each one, determine how it can be detected and what corrective actions should be taken. This approach is time-consuming, labor-intensive, fragile, and inflexible. In contrast, a new algorithmic architecture is proposed in which the space probe is given goals and decides what actions to take next based on its current situation, as constituted by its state and environment, and a set of rules. For example, if a particular sensor gives a reading that indicates there might be a problem, the robot, without intervention, will decide which experiments it might undertake to determine whether there actually is a problem and, based on the outcome, what actions to take. This is accomplished through the use of a knowledge base in conjunction with an inference engine. Simulations have shown that this system performs at least as well as a robot programmed using more traditional techniques.

<<< REPORT 10. >>>
"VARIABILITIES AND PROBABILITIES OF HUMAN BRAIN ANATOMY"
CS201 Lecture
Brian Williams
Computational Sciences Division
NASA Ames Research Center

ABSTRACT: This talk was concerned with the inter- and intra-subject mapping of brain datasets, acquired in multiple in vivo and ex vivo modalities, onto each other and onto normal atlases. This is accomplished through the use of high-dimensional warping fields and nice SGIs. This mapping can be used to quantify brain changes over time and to analyze the statistical significance of anatomic variation from normal limits.

SUMMARY: In this talk the use of specialized coordinate systems and object registration was discussed in the context of brain anatomy. As the brain is a complex object with local and global variations in scale, location, orientation, and shape, any registration technique employed must be able to correct for all of them. This can be accomplished through the use of statistical labeling, in which each point of interest is assigned probabilities based on the likelihood that it belongs to a given set of structures. This forms the basis for algorithms in which a high-dimensional deformation field can be computed mapping a given brain dataset onto either another brain dataset or a brain atlas, of which several standard ones exist. This allows for the accurate comparison of different structures over time and against normals. In addition, the deformation field itself, as a measure of the local anatomic variation of a brain from normal anatomy, can be a useful clinical indicator.
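
What a deformation field does can be suggested with a toy sketch (my own illustration, not the registration algorithm from the talk): a dense field of per-pixel displacements maps one image onto another, and the displacement magnitudes themselves give a crude map of local variation.

    # Toy sketch of applying a dense deformation field (an illustration only,
    # not the registration method from the talk): each output pixel is sampled
    # from the source image at a location offset by a per-pixel displacement,
    # and the displacement magnitudes give a crude map of local deformation.
    import numpy as np

    def warp(image, field):
        """Warp a 2D image by a displacement field of shape (H, W, 2)."""
        h, w = image.shape
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        src_y = np.clip(np.round(ys + field[..., 0]).astype(int), 0, h - 1)
        src_x = np.clip(np.round(xs + field[..., 1]).astype(int), 0, w - 1)
        return image[src_y, src_x]           # nearest-neighbour resampling

    h, w = 64, 64
    image = np.zeros((h, w))
    image[24:40, 24:40] = 1.0                # a square "structure"

    # A smooth synthetic displacement field: shift everything a few pixels,
    # more strongly near the centre.
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    bump = np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 300.0))
    field = np.stack([4.0 * bump, 2.0 * bump], axis=-1)

    warped = warp(image, field)
    deformation_magnitude = np.linalg.norm(field, axis=-1)
    print(warped.sum(), deformation_magnitude.max())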