Lycos search: Connection Machine CM5 Thinking Machines
Load average: 3.01: Lycos Dec 05, 1994 catalog, 840,327 unique URLs (see Lycos News)
Printing only the first 20 of 7242 hits on words: connection, connectiondirection, connectiong, connectionism, connectionist, connectionists, connectionless, connectionmagazine, connectionnumber, connections, connections11, connectionsbetween, connectionshome, machine, machine1, machine2, machine3, machine4, machinecult, machined, machinedesign, machinegunners, machineguns, machinehead, machineinterface, machinelearning, machinem, machinename, machinenamen, machinenames, machinery, machines, machinescript, machineshops, cm5, cm5027, cm5067, cm5096, cm5206, cm5211, cm5234, cm5282, cm5292, cm5345, cm5a12, cm5a3,
ID714356: [score 1.0000]
keys: thinking machine machines
Contact: (machine) firstname.lastname@example.org
(human) email@example.com (J. Eric Townsend)
Purpose: Discussion of administrating the Thinking Machines CM5
To subscribe, send a message to firstname.lastname@example.org with a
*body* of "subscribe cm5-managers your_full_name".
This mailing list is listed in the list of
Publicly Accessible Mailing Lists
- maintained by Stephanie da Silva.
ID767386: [score 0.9873]
title: Massively Parallel Processing
outline: Massively Parallel Processing
keys: machines connection thinking machine
Massively Parallel Processing
Two Thinking Machines Corporation Connection Machines, a 16K-node
CM-200 and a 256-node, vector-equipped
CM-5E, make up the major
portion of the parallel computing resources of the group.
The Center for Computational Sciences (CCS) is part of the National Consortium
for High Performance Computing
( NCHPC ).
The CCS provides research computing to government, industry, and academia.
Contact Denise Yates for account information.
In addition to providing computing services, the group uses its Connection
Machines for its own research projects.
The Global Ocean Prediction project is producing massively parallel
versions of operational ocean prediction and weather forecast models.
Here is an MPEG movie of a Connection Machine Simulation
ID811240: [score 0.9678]
outline: CS-TR-3123, UMIACS-TR-93-80 University of Maryland Department of Computer Science and Department of Electrical Engineering, and
keys: machines thinking connection machine
Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields
This paper introduces scalable data parallel algorithms for image
processing. Focusing on Gibbs and Markov Random Field model
representation for textures, we present parallel algorithms for
texture synthesis, compression, and maximum likelihood parameter
estimation, currently implemented on Thinking Machines CM-2 and CM-5.
Use of fine-grained, data parallel processing techniques yields
real-time algorithms for texture synthesis and compression that are
substantially faster than the previously known sequential
implementations. Although current implementations are on Connection
Machines, the methodology presented here enables
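As a rough illustration of the Gibbs sampling that the abstract refers to (not the paper's data parallel CM-2/CM-5 algorithm, which updates many lattice sites at once), here is a minimal sequential sketch of a binary Ising-style MRF sampler in Python; the function name and parameters are invented for this example:

```python
import math
import random

def gibbs_texture(n, beta, sweeps, seed=0):
    """Toy Gibbs sampler for a binary Ising-style MRF on an n x n torus.

    Illustrative only: `beta` controls how strongly neighbouring sites
    agree, and each sweep visits every site once. A real texture-synthesis
    run would use a richer neighbourhood and many more sweeps.
    """
    rng = random.Random(seed)
    x = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                # sum of the four nearest neighbours (toroidal wrap-around)
                s = (x[(i - 1) % n][j] + x[(i + 1) % n][j]
                     + x[i][(j - 1) % n] + x[i][(j + 1) % n])
                # conditional probability that this site is +1 given neighbours
                p = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
                x[i][j] = 1 if rng.random() < p else -1
    return x
```

Larger `beta` produces smoother, more clustered textures; `beta = 0` gives pure noise.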
ID286903: [score 0.9668]
keys: thinking machines connection machine
Overview of Wide Area Information Servers
The Wide Area Information Servers system is a set of products supplied by
different vendors to help end-users find and retrieve information over
networks. Thinking Machines, Apple Computer, and Dow Jones initially
implemented such a system for use by business executives. These products
are becoming more widely available from various companies.
What does WAIS do?
Users on different platforms can access personal, company, and
published information from one interface. The information can be anything:
text, pictures, voice, or formatted documents. Since a single
computer-to-computer protocol is used, information can be stored anywhere
on different types of machines. Anyone can use this system since
ID600254: [score 0.9619]
title: USING MPI
outline: Using MPI
keys: machines thinking connection machine
Portable Parallel Programming with the Message-Passing Interface
William Gropp, Ewing Lusk, and Anthony
The parallel programming community recently organized an effort to
standardize the communication subroutine libraries used for
programming on massively parallel computers such as the Connection
Machine and Cray's new T3D, as well as networks of workstations.
The standard they developed, the Message-Passing Interface (MPI), not
only unifies, within a common framework, programs written in a
variety of existing (and currently incompatible) parallel
languages, but also allows for future portability of programs between
machines. Three of the authors of MPI have teamed up here to
present a tutorial on how to use MPI to write parallel programs,
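MPI itself is a C/Fortran library, but the send/receive model it standardizes can be loosely illustrated in plain Python with processes and pipes; the names below (`worker`, `gather_greetings`) are invented for this sketch and are only an analogy to MPI_Send/MPI_Recv, not MPI code:

```python
from multiprocessing import Process, Pipe

def worker(conn, rank):
    # each worker "rank" sends one message back to the coordinator,
    # loosely mimicking an MPI_Send toward rank 0
    conn.send("hello from rank %d" % rank)
    conn.close()

def gather_greetings(nprocs):
    """Spawn workers and collect one message from each (cf. MPI_Recv)."""
    channels, procs = [], []
    for rank in range(1, nprocs + 1):
        parent, child = Pipe()
        p = Process(target=worker, args=(child, rank))
        p.start()
        channels.append(parent)
        procs.append(p)
    msgs = [c.recv() for c in channels]  # one receive per worker, in rank order
    for p in procs:
        p.join()
    return msgs

if __name__ == "__main__":
    print(gather_greetings(3))
```

Real MPI adds what this sketch lacks: a single program run on every rank, collective operations, and portability across machines like the CM-5 and T3D.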
ID807630: [score 0.9528]
title: SCD Computational Servers
outline: High Performance Computational Servers The CRAY Y-MP8/864 (Shavano)
keys: machines machine thinking
SCD Computational Servers
High Performance Computational Servers
Computational servers in SCD's network include:
* a CRAY Y-MP8/864
* a CRAY Y-MP2/216
* a four processor CRAY-3
* an eight node IBM SP-1
* an IBM RS/6000 Cluster, and
* a 32 node Connection Machine (CM-5)
These systems provide the computing power to run the large simulations
required by our user base.
The CRAY Y-MP8/864 (Shavano)
Delivered in May, 1990, this supercomputer has eight processors, 64
million words (Mwords) of central memory, an internal speed of six
nanoseconds (ns) per calculation, a 256-Mword Solid-state Storage Device
(SSD), and 78 billion bytes (gigabytes) of disk storage. The
CRAY Y-MP8/864 runs UNICOS, the UNIX-based operating system for Cray Research, Inc., computers. The machine
ID756745: [score 0.9525]
outline: NCSA CM-5 Overview Configuration Policies Accounting Training
NCSA Connection Machine User Guide
NCSA CM-5 Overview
The Connection Machine Model 5 (CM-5) from Thinking Machines Corporation
is a massively parallel, distributed memory system that supports both
data parallel and message-passing programming. The I/O subsystem on
the CM-5 includes a Scalable Disk Array (SDA) -- a parallel disk storage
system connected directly to the CM-5 data network that provides high-speed
disk I/O -- and a HIPPI interface that provides high-speed data transfer.
NCSA's CM-5 has:
* 512 node processors
* 64-bit floating point and integer hardware
* 16 gigabytes (Gbytes) of memory
* 130-Gbyte Scalable Disk Array
Each node consists of four vector units connected by a 64-bit bus to
a SPARC CPU and a Network Interface chip.
Overview of the CM-5
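A quick sanity check on the configuration quoted above (the variable names are ours; all numbers come from the guide):

```python
# back-of-envelope figures for NCSA's CM-5, using the numbers quoted above
nodes = 512
total_memory_mbytes = 16 * 1024   # 16 Gbytes, expressed in Mbytes
vector_units_per_node = 4

memory_per_node = total_memory_mbytes // nodes     # Mbytes per node
total_vector_units = nodes * vector_units_per_node

print(memory_per_node)     # 32 Mbytes of memory per node
print(total_vector_units)  # 2048 vector units in the full machine
```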
ID529318: [score 0.9272]
title: AHPCRC Research: Large-Scale Simulations
outline: AHPCRC Research Projects Large-Scale Simulations of Turbulent Geothermal Convection on a Network of Supercomputers
keys: thinking machines connections machine
AHPCRC Research: Large-Scale Simulations
AHPCRC Research Projects
Large-Scale Simulations of Turbulent Geothermal Convection on a Network of Supercomputers
The following simulation makes use of some of the tools which were developed
by the Minnesota Supercomputer Center for the AHPCRC. This simulation
executed across a HIPPI network and three architecturally dissimilar high
performance computers as well as a high performance graphics workstation.
This picture shows the output as seen on the workstation's screen while
the simulation is running. As the simulation runs, the window across the
bottom of the screen displays the location of the data. A small icon is
used to represent the computers involved. As each computer performs its
calculations, its icon is highlighted
Turbulent Geothermal Convection
ID388189: [score 0.7947]
title: NCSA CM-5 Welcome Page
outline: General information Current status
keys: connection thinking machine
NCSA CM-5 Welcome Page
Please note: This web server, like so many others, is under construction. I am
nowhere near ready to announce my presence to the world through
What's New With NCSA Mosaic . Please send any suggestions about this web server
to email@example.com. Thank you, and have fun!
may be used to view the
NCSA Connection Machine User Guide (which is under revision) or the
NCSA CM-5 FAQ .
Additionally, on-line TMC documentation can be viewed with cmview.
Further on-line documentation (and postscript files for much of what can be viewed with cmview)
is stored on the CM-5 in the
/usr/local/doc directory. For example, a partial list of (extra) software installed
on the CM-5 is in /usr/local/doc/Software
NCSA CM-5 Information
ID533234: [score 0.7894]
title: Sydney Regional Centre for Parallel Computing
outline: Sydney Regional Centre for Parallel Computing
keys: thinking connection
Sydney Regional Centre for Parallel Computing
Welcome to the SRCPC Web server. This provides information relating to high
performance computing. The SRCPC is hosted by the University of New South Wales .
The next Introductory CM5 Programming Course will be held between the
5th-8th of December at UNSW. Application
forms may be downloaded here.
* Introduction to SRCPC's CM5
* Local Hints
* Introductory CM5 Course
* Connection Machine CM5 Technical Summary
* Latest Usage Stats for the CM5
CMSSL for C* Version 3.2 has numerous changes from V3....
AU University of N.S.W., Sydney Regional Centre for Parallel Computing
Sydney Regional Center for Parallel Computing
ID411658: [score 0.7651]
Thinking Machines' CM5
ID650849: [score 0.7523]
From firstname.lastname@example.org Wed Mar 23 09:15:06 EST 1994
Subject: Nesl: a parallel functional language
A full implementation of the NESL language and environment is now
available via anonymous FTP. NESL is a fine-grained, functional,
nested data-parallel language. The current implementation runs on
workstations, the Connection Machines CM2 and CM5, the Cray Y-MP and
the MasPar MP2.
NESL is loosely based on ML. It includes a built-in parallel
data-type, sequences, and parallel operations on sequences (the
element type of a sequence can be any type, not just scalars). It is
based on eager evaluation, and supports polymorphism, type inference
and a limited use of higher-order functions. Currently it does not
have support for modules and its datatype definition is limited
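NESL's central construct, apply-to-each over (possibly nested) sequences, has a close sequential analogue in Python comprehensions; the sketch below only illustrates the semantics, since NESL would execute these element-wise operations in parallel (the function names are invented for this example):

```python
def square_all(xs):
    # flat apply-to-each: NESL's  {x * x : x in xs}
    return [x * x for x in xs]

def square_nested(xss):
    # nested apply-to-each: NESL's  {{x * x : x in xs} : xs in xss};
    # "nested data parallelism" means the inner comprehensions could
    # all run in parallel too, even when the sub-sequences differ in length
    return [[x * x for x in xs] for xs in xss]

print(square_all([1, 2, 3]))         # [1, 4, 9]
print(square_nested([[1, 2], [3]]))  # [[1, 4], [9]]
```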
ID411653: [score 0.7519]
title: AIMS Home Page
outline: An Automated Instrumentation and Monitoring System Examples of usage
AIMS Home Page
An Automated Instrumentation and Monitoring System
Detailed information about AIMS can be obtained by clicking on parts of the following slide
AIMS consists of a
suite of software tools
for measurement and analysis of performance; it includes
* xinstrument : a source-code instrumentor that supports Fortran77 and C message-passing programs written under three communication libraries: NX, CMMD, and PVM;
* monitor :
a library of timestamping and trace-collection routines that run on
and Paragon ,
Thinking Machines' CM5 ,
as well as networks of workstations (including Convex Cluster,
SparcStations, and SGIs connected by a LAN);
* tpp : a utility for removing monitoring overhead and its effects on the communication patterns as recorded
ID133763: [score 0.5468]
keys: connection connections machines machine
A Compile Time Model for Composing Parallel Programs
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Many distributed memory machines support connection-based
communication instead of or in addition to connection-less message
passing. Connection-based communication can be more efficient than
message passing because the resources are reserved once for the
connection and multiple messages can be sent over the connection.
While long-lived connections enable more efficient use of the
communication system in some situations, managing connection resources
adds another level of complexity to programming such machines. iWarp
is an example of a distributed memory machine that supports long-lived
ID739073: [score 0.5211]
outline: CU-SeeMe Video conferencing experiments
keys: machines connections
CU-SeeMe Video conferencing experiments
* CU-SeeMe stuff (useful information)
* NASA Select TV live.
* Remote controlled pan & tilt video camera.
Screendump from a CU-SeeMe session. All three
participants are exchanging video through a reflector at NYSERNet,
Liverpool New York.
From time to time I will be transmitting live video from our
home. The transmissions are made from a Macintosh using the Cu-SeeMe
package from Cornell. You can watch these transmissions from any color
Macintosh (with a direct IP Internet connection) running CU-SeeMe by
connecting to the reflector machine fenris.hiof.no (188.8.131.52)
at the MultiMedia Lab in our Computer Science Department
(Østfold Regional College, Halden, Norway). Other reflectors
can be found on the list
CU-SeeMe reflector sites etc.
CU-SeeMe Video-Conferencing Experiments
Description and examples of CU-SeeMe
ID161607: [score 0.5187]
keys: connection machinery machines machine
Select one of:
* ACM SIGGRAPH Online Bibliography Project
* Association for Computing Machinery (ACM) gopher
* Computer jargon dictionary (search)
* Connection Machine's FORTRAN manual (search)
* DFN-CERT Security Archive
* Functional Programming Abstracts (search)
* Guide for finding source code (search)
* High Performance Computing Newswire (HP...
North Carolina State University Library gopher
ID294287: [score 0.5118]
keys: machinename machine machines connection
Transferring Files to and from Another Machine Using FTP
FTP stands for File Transfer Protocol. It can be used to
transfer files across the network between any machines running FTP.
FTP is typically used in one of two ways: (1) to transfer files
between accounts on two different machines, and (2) to obtain
files made available for public distribution by other machines.
If you want to transfer files between accounts, say between your
account and that of a colleague at another university, you must
know the machine name, account and password of your colleague's
account. Then start FTP like this:
where "machinename" is the full network name of your colleague's
machine (probably something like
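The second usage described above, fetching publicly distributed files by anonymous FTP, can be sketched with Python's standard `ftplib`; the function and all its arguments are placeholders, not anything specific to the site in this snippet:

```python
from ftplib import FTP

def fetch_public_file(machinename, remote_path, local_path):
    """Anonymous-FTP sketch: retrieve one publicly distributed file.

    `machinename` is the full network name of the remote host, as in
    the guide above; `remote_path` and `local_path` are placeholders.
    """
    with FTP(machinename) as ftp:
        ftp.login()  # no user/password: anonymous login for public files
        with open(local_path, "wb") as f:
            # binary mode avoids line-ending translation on non-text files
            ftp.retrbinary("RETR " + remote_path, f.write)
```

Transferring between two personal accounts (the first usage) is the same call pattern with `ftp.login(user, password)` in place of the anonymous login.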
ID754830: [score 0.4431]
title: Sorting for Particle Flow Simulation on the Connection Machine
outline: Sorting for Particle Flow Simulation on the Connection Machine Abstract
keys: connection machines machine
Sorting for Particle Flow Simulation on the Connection Machine
by Leonardo Dagum
RNR Technical Report RNR-90-017
This paper investigates the sorting requirements of
a particle simulation and analyzes the sorting algorithms currently
in use on sequential, vector, and data parallel implementations
of particle flow simulations. Particle simulation requires sorting n
integers in the range [1, O(n)] and takes O(n) running time
on sequential or vector machines.
The data parallel implementation of a particle simulation is shown to
be non-optimal with running time O(n log n).
Until recently, there have been no optimal parallel integer sorting algorithms.
This paper presents an optimal deterministic algorithm
for parallel sorting in a particle
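The O(n) sequential bound the abstract cites comes from the restricted key range: integers in [1, O(n)] can be sorted by counting rather than comparison. A minimal sequential sketch (this is the baseline the paper improves on in parallel, not the paper's algorithm):

```python
def counting_sort(keys, max_key):
    """Sort integers drawn from [1, max_key] by counting occurrences.

    When max_key is O(n), both passes are O(n), beating the
    O(n log n) comparison-sort lower bound for this restricted input.
    """
    counts = [0] * (max_key + 1)
    for k in keys:
        counts[k] += 1
    out = []
    for value in range(1, max_key + 1):
        out.extend([value] * counts[value])  # emit each key count[value] times
    return out

print(counting_sort([3, 1, 2, 3, 1], 3))  # [1, 1, 2, 3, 3]
```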
ID756628: [score 0.4408]
outline: Acquisitions and Upgrade Paths
keys: machine machines thinking
NCSA Scalable Metacomputing Strategy, April 1994
Acquisitions and Upgrade Paths
An SGI Power Challenge has been purchased and will be available
by May 1994. This machine will initially have 32 R4400
processors (150 MHz, 75 Mflop peak), 2 Gbytes of shared memory,
and 80 Gbytes of RAID disk. In July 1994, this machine will be
upgraded to 16 TFP superscalar processors, each running at 75
MHz and 300 Mflop peak speed. This machine will be binary
compatible (for single processor applications) with R4400-based
SGI workstations. In addition, NCSA will experiment this
summer with a Houston-based SGI Challenge Array consisting of
four networked Challenge machines with 20 processors each.
This machine will be used to evaluate both high performance
science and engineering
Acquisitions and Upgrade Path
Convex Computer Corporation
Silicon Graphics Inc.
Thinking Machines Corporation
ID683514: [score 0.4407]
title: MPEG Movie Archive
outline: Latest News
keys: machine machines
MPEG Movie Archive
October 26, 1994
Machines within our university now have unlimited access to
the archive. For machines within the .nl domain the R-rated
section has been opened again. Questions regarding access for
machines outside the Netherlands will not be answered.
If all goes well and the traffic doesn't increase drastically,
I will be opening the R-rated section of the archive during
the weekend for all machines soon.
October 3, 1994
A sad day in the history of the MPEG archive... Today
I had to close the R-rated section of the archive because the
machine it is running on (Sun SPARCstation 10 with 32 MB of
memory and 100 MB swap !!) was getting into memory problems.
The main reason for that is that each connection occupies about
Lycos 0.9beta4 06-Dec-94 / 8-Dec-94 / email@example.com