LIBRARY

  • Schussel gift helps Dana-Farber scientists explore new territory

    Source: Dana-Farber Impact, Published: Winter 2018

    Both Sandi and George Schussel believe that knowledge is power, and, with more knowledge, the end of cancer is inevitable. When Sandi was diagnosed with angioimmunoblastic T-cell lymphoma (AITL) and began treatment at Dana-Farber with oncologist Matthew Davids, MD, MMSc, she and her family learned that the genetic drivers of her rare blood cancer were not known. In response, the Schussels made a new gift of $100,000 to the previously established Sandra and George Schussel Family Fund at Dana-Farber to research the drivers of T-cell lymphomas, including AITL. The fund will be managed under the direction of David Weinstock, MD, whose lab focuses on T-cell lymphomas. “As a nurse, I am very interested not only in advancing discoveries, but also in encouraging the sharing of information within the research community,” Sandi said. Their gift is already having a significant impact. With support from the Schussels, an instructor in Weinstock’s lab, Samuel Ng, MD, PhD, created the first cell line for this disease. “This cell line will allow us to determine which genes AITL depends on for survival and to find drugs that specifically kill it,” Weinstock said. “The Schussels’ gift has helped us make a completely unique resource for the whole lymphoma research community.” George shares this commitment to increasing knowledge of this “orphan” disease: “The more we know about what we’re battling, the easier we can find cancer’s weak point, and we can treat it with agents less injurious to patients.” ■

    Download PDF

  • Schussel Family Fund promotes T-cell lymphoma research

    Source: Dana-Farber Impact, Published: Spring 2017

    Two years ago, when Sandi Schussel was diagnosed with a rare T-cell lymphoma, she was so ill she could not do much of anything, including skiing or hiking, two of her favorite outdoor activities. Today, after treatment at Dana-Farber, she is in complete remission and looks forward to getting back on the slopes and trails of New Hampshire. Grateful for the care she received, Sandi and her husband, George, have created the $100,000 Sandra and George Schussel Family Fund to support T-cell lymphoma research under the direction of her physician, Matthew Davids, MD, MMSc, and David Weinstock, MD. “This generous gift allows us to explore new therapeutic combinations in the laboratory using novel techniques developed here at Dana-Farber,” said Davids. “We hope this work will directly lead to new clinical trials for patients with T-cell lymphoma.” Specifically, Davids will employ a technique pioneered by Anthony Letai, MD, PhD, and colleagues at Dana-Farber called dynamic BH3 profiling to determine which drug combinations are most promising for killing T-cell lymphoma tumor cells. Weinstock will use new tools for studying T-cell lymphomas, known as patient-derived xenografts, to predict human tumor responses to anticancer drugs. “We have gained so much from Dana-Farber’s research and medical treatment,” said George. “They have given us our lives back, so that we can start to hike again and take advantage of the wonderful New England countryside we live in.” ■

    Download PDF

  • Schussels provide pivotal funds for vital data analysis

    Source: Dana-Farber Impact, Published: Fall 2018

    Since Sandi Schussel’s successful treatment at Dana-Farber for a rare T-cell lymphoma in 2015, she and her husband, George, have become active supporters of cancer research. They have made generous gifts supporting Dana-Farber research into breakthrough cures for lymphomas, and in August they appeared with Sandi’s oncologist, Matthew Davids, MD, MMSc, on the WEEI/NESN Jimmy Fund Radio-Telethon presented by Arbella Insurance Foundation to tell her story of success and encourage listeners to give. Motivated by George’s professional expertise in database technology, the couple wondered whether gene sequencing, the resulting big data, and artificial intelligence could further progress toward conquering cancer. “Because it’s now much more common for patients to get their tumors sequenced,” explained Dana-Farber’s David Weinstock, MD, “it’s important to look at a large set of samples and understand the frequency with which we find something that actually changes the treatment.” Sandi and George’s most recent gift of $100,000 is helping Weinstock and his colleagues analyze a large set of gene sequencing samples, collaborating with investigators from multiple institutions to gain new insights into which gene abnormalities are important and targetable. “The Schussels provided seed money at a pivotal time,” said Weinstock. “Sandi and I were both trained in the scientific method and have faith that understanding the human genome and its relationship to cancer will provide an important new boost to cancer therapies, thereby benefiting all of mankind,” said George. ■

    Download PDF

  • Six degrees of celebration

    Source: ROI, MIT Sloan alumni magazine, Published: March 2000

    The extraordinary story of one family’s gift to MIT

    Dr. George Schussel, Chairman and CEO of DCI (Digital Consulting, Inc.), was recently at an MIT Sloan capital campaign event, talking to the people at his table. “Everyone was wearing their badges — you know, MS ’61 and so on — and I didn’t have one,” he says. “Someone asked me about it and I said, ‘I’m not an alum, although I did take some classes at MIT.’ I got some strange looks from around the table.” Schussel has that effect on people. A man of exceedingly high energy and a gifted raconteur, he is always in the middle of things. From the timely founding of his Andover, Massachusetts-based company, which is a pioneer in technology education, to his sage advice to his then high-school-aged daughter a few years ago, Schussel seems to have a magic touch. Why, then, are he — a non-MIT alumnus — and his wife Sandra, a Vice President and Cofounder of DCI, making a gift of $2 million to the MIT Sloan School to endow a Professor of Management Science chair? When asked, “Why MIT Sloan?” Schussel doesn’t hesitate to offer a reason. In fact, he offered six good reasons. His family members hold six degrees from MIT. It all began with a conversation with Jack Rockart, Director of the Center for Information Systems Research at MIT Sloan…

    Download complete text as PDF

  • Real-Time Enterprise

    Source: Business Week Published: Mar 2002

    In 2002, George Schussel created the “Real Time Enterprise Conference” for DCI. The first event of this type was held in San Francisco at the Palace Hotel. Business Week magazine was a co-sponsor of the event and ran a special-edition section on the concept of a real-time enterprise. Business Week said, “Think of it as the ‘big bang’ of business. Where the 1990s might be thought of as the Internet Age, this decade will be known as the Real Time Age.” This article focuses on creating sustainable competitive leadership through a synergy of business and technology.

    Download PDF

  • OPTIMUM INVENTORY SCHEDULING

    Source: Management Science  Date: August, 1969

    This paper describes four computer programs that perform what were formerly management-decision functions in inventory scheduling. The first program solves the classical EOQ problem (uniform inventory usage) with quantity discounts. The second program, Economic Requirement Batching (ERB), uses dynamic programming, a heuristic search algorithm, and the relation between shipment size and material cost to locate the optimum delivery schedule for any deterministic schedule of discrete (irregular) requirements. The third program, the Alternative Delivery Schedule Evaluator (ADSE), compares any alternative delivery schedules that meet requirements; it calculates all costs associated with inventory and displays both the optimum alternative and the opportunity losses incurred by inferior alternatives. The fourth program, the Alternative Delivery Schedule Generator (ADSG), solves the difficult problem of optimally scheduling inventory in those cases requiring vendor production to special specifications and, accordingly, where the exact price of the item is unknown and depends on the manner in which it is produced. This situation is differentiated from ERB because ERB analyzes standard vendor shelf items that can be delivered according to any schedule. With little input data, ADSG generates a few highly efficient alternative delivery schedules upon which the vendor quotes. The returned bids are then evaluated by ADSE, which determines the optimum delivery schedule. Since over three years have elapsed between when this work was done and the publication of this article, it concludes with a view of the program’s successes and failures and some relevant empirically observed relations.
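
    The sketch below is a minimal illustration of the classical EOQ calculation with quantity discounts that the first program addresses. It is written in Python under assumed cost parameters; the function names and the price-break format are hypothetical, not taken from the 1969 programs.

        from math import sqrt

        def eoq(annual_demand, order_cost, unit_cost, holding_rate):
            # Classical economic order quantity for uniform usage: Q* = sqrt(2DS / ic)
            return sqrt(2.0 * annual_demand * order_cost / (holding_rate * unit_cost))

        def best_order_quantity(annual_demand, order_cost, holding_rate, price_breaks):
            # price_breaks: list of (minimum_quantity, unit_cost) pairs, e.g. [(0, 10.0), (1000, 9.6)].
            # For each price break, take the EOQ at that unit cost (raised to the break's
            # minimum quantity if needed), total the annual material, ordering, and
            # carrying costs, and keep the cheapest alternative.
            best_q, best_cost = None, None
            for min_qty, unit_cost in price_breaks:
                q = max(eoq(annual_demand, order_cost, unit_cost, holding_rate), min_qty)
                total = (annual_demand * unit_cost                # material cost
                         + annual_demand / q * order_cost         # ordering cost
                         + q / 2.0 * holding_rate * unit_cost)    # carrying cost
                if best_cost is None or total < best_cost:
                    best_q, best_cost = q, total
            return best_q, best_cost

        # Example: 12,000 units a year, $40 per order, 25% annual carrying rate,
        # $10.00 each below 1,000 units and $9.60 each at 1,000 units or more.
        print(best_order_quantity(12000, 40.0, 0.25, [(0, 10.0), (1000, 9.6)]))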

    Download PDF

  • Workload Balancing and Inventory Minimization for Job Shops

    Source: The Journal of INDUSTRIAL ENGINEERING  Date: April, 1968

    The purpose of this article is to discuss a specific algorithm for forecasting and balancing the workload in a job shop. This algorithm provides a procedure for combining economic analysis and workload forecasts into an efficient, economical schedule for a job shop. The studies resulting in the preparation of this article were performed in an aerospace company possessing a rather large and complicated job shop. The complexity of the jobs passing through the job shop was such that in some cases six to nine months of flow time was required to process a single job in its entirety. In almost all cases, the planning and release of orders occurred months ahead of the actual time that the finished product was needed, because no method of forecasting and controlling the workload in the job shop existed. In order to provide efficient utilization of labor and machines, it was therefore necessary to have a large inventory of work waiting behind every machine on the floor. This unnecessarily large inventory enabled most machines to be used continually and most jobs to be turned out on a timely basis. This situation of excess inventory and its associated carrying cost is typical of the aerospace industry and may be, to a limited extent, characteristic of all job shops.
    Download PDF

  • JOB-SHOP LOT RELEASE SIZES

    Source: Management Science  Date: April, 1968

    A predecessor to the concepts of just-in-time inventory and production was the idea of optimizing the production of material with the assistance of operations research models. Schussel here proposes an original model, ELRS (Economic Lot Release Size), for determining optimal job-shop lot sizes. Since the setting of the lot release size was one of the most important controlling parameters in the scheduling of a job shop, the improvement in technique described in the article resulted in substantial savings in job-shop operating costs. In those cases where management was already using an EOQ (Economic Order Quantity) method for determining lot release sizes, the input data and system required for the implementation of ELRS were basically similar, and therefore the new and improved analysis could be substituted at relatively minor inconvenience to the user. For those job shops which set lot release sizes by informal means, the ELRS concept provided enough of an improvement to induce use of the analysis.

    Download PDF

  • Simulation in Business

    Source: Datamation  Publication date: June, 1967

    This early Datamation article is a succinct summary of Schussel’s doctoral dissertation at Harvard Business School. That dissertation was researched and submitted in 1964, a relatively early period in the use of computers for business purposes and especially in the new field of simulating human behavior through a rules-based algorithm. Schussel’s research showed the possibility of using the scientific approach of simulation in a business environment. Whereas the usefulness of simulation in helping solve scientific and engineering problems had been readily accepted for some time, it was a new concept to the business and operational community. One of the primary reasons that detailed simulation of the business environment had become practical was that the ability to manipulate and process large amounts of data rapidly had grown fantastically with each new generation of computers. By 1967 simulation had matured to the point where universities had begun teaching courses in it. This substantial growth and interest resulted in an increase in relevant theory and literature. A new subfield of simulation, called behavioral theory, had also been developed and expounded by researchers at various universities, notably Carnegie Tech.

    Download PDF

  • Sales Forecasting with a Human Simulator

    Source: Management Science  Publication Date: June 1967

    This article summarized the work done by George Schussel in his doctoral thesis at Harvard Business School. His work investigated the possibility of using computer simulation to replace forecasting work done by sales executives. The simulation was of the human behavior characteristics of a diverse group of retail dealers. The purpose of the simulation was to propose and test a new method of forecasting a manufacturer’s sales to his retail dealers. A simulation model of dealer behavior was constructed and proved very effective in helping the manufacturer to generate good sales forecasts. Unsophisticated retailers were found to have a sufficiently systematic set of procedures to permit simulation of these procedures by a computer model and it was concluded that effective simulation of the human decision processes of a large nonhomogeneous group of businessmen is possible.

    Download PDF

  • Characteristics and Problems of Aerospace Company Management

    Source: Journal of the Astronautical Sciences  Publication Date: January, 1967

    In the mid-1960s the aerospace/defense industry was the largest industry in the United States in terms of both sales and employment. At that time it accounted for 25% of all capital goods produced in the US. Also in the mid-1960s, aerospace/defense industry employment was well over 2,000,000, and because of the Vietnam War that employment grew greatly as the 1960s flowed into the 1970s. According to the U.S. Department of Commerce, employment in the aerospace/defense industry was larger than the primary metal industries and the motor vehicle and equipment industries combined. The industry was (and still is) especially important to the U.S. balance of payments, since aerospace/defense exports were and still are one of the largest items in the U.S. balance of payments, which underpins the value of the dollar.

    Download PDF

  • CRM Magazine on George Schussel’s role in the Customer Relationship Management (CRM) market

    Source: CRM Magazine  Date: February, 2003

    Casual discussion and background review of some of my work. Interestingly, predictive analytics is mentioned and we only see that really starting to happen in 2012.

    Download PDF

  • Software Debate: Sifting the Marketplace

    Source: Information Week  Date: March 1988

    In 1988, InformationWeek magazine assembled a group of software gearheads to discuss the pressing issues of that time. I was invited to be one of the group and accepted. The conversation focused on what a CIO (chief information officer) needed in order to do a superior job. Among the topics of discussion were: purchasing software packages vs. custom development, how to find the right software vendor, the emergence of standard DBMS software packages and advanced development languages for those environments, IBM’s influence on emerging standards, and the rationale for outsourcing a company’s IT department.

    Download PDF

  • Your EDP Department-How Good Is It?

    Source: Atlanta Economic Review  Date: January, 1976

    The results of interviews and research on how the quality of an information technology department’s operations can be assessed. The research was based on interviews and interactions with the CIOs of dozens of companies.
    Download PDF

  • Wall Street Automation: A Primer

    Source: Datamation  Date: 1969

    In 1969 I worked full time supervising work that Informatics was doing for Wall Street brokerage firms. We had contracts for both front-end (message and transaction switching) and back-office (accounting and reporting) automation. I was surprised at how complicated these applications were. This article goes into some detail on what must be done in brokerage transaction recording and accounting systems.

    During the latter part of 1967, all of 1968, and the early part of 1969, it was hard to pick up any financial journal without reading an article about the problems that stock brokerage firms were having in the back-office processing of securities orders. The front office is responsible for selling securities and thereby generating a brokerage firm’s income, but the back office provides the production capabilities to process trades generated by the front office. Insufficient back-office capabilities have caused grave crises for some brokerage firms, and for the securities industry as a whole. As an example, in September of 1969 the Securities and Exchange Commission (SEC) announced that it had fined one of the largest brokerage houses $150,000 for violations of SEC regulations caused by the firm’s back office in the preceding year.

    Download PDF

  • Client/Server Software Architectures–An Overview

    Source: Carnegie Mellon Software Engineering Institute  Date: 2001

    A state-of-the-art discussion referencing work done by Herb Edelstein and George Schussel. From the article:

    “The term client/server was first used in the 1980s in reference to personal computers (PCs) on a network. The actual client/server model started gaining acceptance in the late 1980s. The client/server software architecture is a versatile, message-based and modular infrastructure that is intended to improve usability, flexibility, interoperability, and scalability as compared to centralized, mainframe, time sharing computing.

    A client is defined as a requester of services and a server is defined as the provider of services. A single machine can be both a client and a server depending on the software configuration. For details on client/server software architectures see Schussel and Edelstein [Schussel 96, Edelstein 94].

    This technology description provides a summary of some common client/server architectures and, for completeness, also summarizes mainframe and file sharing architectures. Detailed descriptions for many of the individual architectures are provided elsewhere in the document.”

    Download PDF

  • Distributed DBMS Evaluation

    Source: Eight-chapter consulting report prepared for a private client by George Schussel  Date: 1987

    By the mid-1980s the idea of using a database management system as the foundation of corporate information systems was well established. But the immense popularity of the IBM PC beginning in the early 1980s changed the landscape of corporate computing. As all kinds of applications migrated to the PC platform, IT departments found themselves charged with supporting computing across mainframes, minicomputers, and PCs. Of course it was only natural that the architectural foundation of database management would migrate into a distributed environment. This 1987 research paper was commissioned by a large equity investor and is here published for the first time.

    Download PDF

  • Mapping out the DBMS territory

    Source: Data Management Magazine  Date: February, 1983

    In the 1970s there were no more than two dozen widely marketed DBMS product lines. Non-IBM DP shops, using equipment such as Univac, Honeywell or Burroughs, simply took the DBMS offered by the hardware vendor. IBM shops could choose IBM’s IMS and DL/1 products, or they had a choice of a handful of successful, competing products such as  Software AG’s ADABAS, Cincom Systems’ TOTAL and Cullinane’s (Cullinet) IDMS.

    Today, there are 75 to 100 qualified software vendors marketing DBMS packages for all types of computers, from micros to mainframes. Many different logical models are represented in the products being actively marketed: hierarchical, inverted, CODASYL, master/detail, and relational.

    This article provides an overview of where the DBMS market was heading in the early 1980’s.

    Download PDF

  • The Role of the Data Dictionary

    Source: Datamation  Date: June, 1977

    This article introduced the idea of data dictionaries as a complement to the increasingly ubiquitous DBMS. The primary purpose of a data dictionary was to aid in controlling the corporate data resource, to help reduce programmer error, and to support program documentation.

    For several years, not only has it been in vogue but also considered good business practice to devote closer attention to the management of data as part of the corporate EDP function. Witness to this was the rapid emergence and acceptance of data base management systems (DBMS) as primary control software in manipulating information files. Many felt that the arrival and acceptance of data base concepts and data base management systems was the single most important happening in the data processing field since the development of operating systems.

    Data dictionaries were part of this “happening.” Although they were altogether different types of software products from DBMSs, data dictionaries became widely available and readily accepted as primary tools for better data management. They were used with or without DBMSs too, as the two package types were complementary, not mutually exclusive.

    A data dictionary was/is a repository of information about the definition, structure, and usage of data. It did not contain the actual data itself. Simply stated, the data dictionary contains the name of each data type (element), its definition (size and type), where and how it’s used, and its relationship to other data.

    The purpose of such a tool was to permit better documentation, control, and management of the corporate data resource, goals which may or may not be achieved through the use of a DBMS. Advanced users of data dictionaries have found them also to be valuable tools in the exercise of project management and systems design.
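
    To make the idea concrete, here is a minimal sketch, in Python, of the kind of record a data dictionary keeps for one data element; the field names and sample values are illustrative assumptions, not drawn from any particular product described in the article.

        from dataclasses import dataclass, field

        @dataclass
        class DictionaryEntry:
            # One data element as the dictionary describes it: its name, its
            # definition (type and size), its business meaning, and where and
            # how it is used in relation to other data.
            name: str
            data_type: str
            size: int
            description: str
            used_by: list = field(default_factory=list)      # programs/reports that reference it
            related_to: list = field(default_factory=list)   # other elements it relates to

        customer_id = DictionaryEntry(
            name="CUSTOMER-ID",
            data_type="NUMERIC",
            size=9,
            description="Unique identifier assigned to each customer account",
            used_by=["billing run", "monthly sales report"],
            related_to=["ORDER.CUSTOMER-ID"],
        )
        print(customer_id.name, customer_id.data_type, customer_id.size)
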
    Download PDF

  • QUAND RENONCER A L’UTILISATION D’UNE BASE DE DONNEES (When to Forgo Using a Database)

    Source: L’informatique  Date: 1976

    French-language discussion of the reasons and cases for not using a database management system (DBMS), or where the situation calls for a different technology, written during an era when everyone was touting the DBMS as the right technology for practically all commercial applications.
    Download PDF

  • When Not to Use a Data Base

    Source: Datamation  Date: November, 1975

    At the time of this article, database technology was too complicated and expensive a tool to use for all business applications.

    From the article: “The data base approach is relevant and essential to data processing development during the remainder of the ’70s and through the ’80s. Wise organizations will use the checklist in this article to see if the time is right for them to move to DBMS. An intelligent approach is to move slowly but surely into the data base environment. Careful consideration and analysis of the relative advantages and disadvantages of the data base approach is needed before hopping onto the data base bandwagon.”

    Download PDF

  • Data Bases

    Source: Chapter 32 from the book, Information Systems Handbook  Date: 1974

    An overview (36 pages) of the state of the art in the database management (DBMS) field from the early 1970s. From the chapter…

    “…the structuring of the data has always been with the goal in mind of achieving optimum performance for retrieval of that data for the one particular application that used it, the reuse of the same data for other applications has been difficult or impossible. As a result, when new needs for this same information came along similar data was restructured in files which fit the next application, and so forth. The end product after a period of years with multiple developments has always been much data redundancy and the attendant expensive problems of much storage, difficult access, and poor control.

    The DB approach is “data” rather than “program” oriented and builds the one DB to provide good accessibility to data for all of the applications….”
    Download PDF

  • Data Base: A New Standard for Insurance EDP

    Source: Best’s Review Date: October, 1972

    A NEW CONCEPT for business data processing was just beginning to be heard about in 1972. This field was beginning its long path toward achieving ubiquity in corporate EDP shops. It was already achieving significant acceptance by computer manufacturers and major users of computer systems. This new technique used standard “data management systems” (DMS) software packages to implement business data processing on a “data base” (DB).

    The DB concept was in stark contrast to the traditional approach of storing data. It said “instead of storing identical information in different places, we’ll put it all in one place.” And insurance companies, who were basically in the business of processing data, promised to be prime candidates for the new technology.

    Download PDF

  • BUSINESS EDP MOVES TO DATA BASES

    Source: Business Horizons  Date: December, 1972

    In a prophetic article written at a very early date (1972), the author accurately predicts the emergence of data base management systems as a standard in business EDP everywhere. As the article states, “A new concept in business data processing is achieving significant acceptance by both computer manufacturers and major users of computer systems. This new technique uses standard “data management systems” (DMS) software packages to implement business data processing on a “data base” (DB). Currently the number of American DMS users is estimated to be approximately 600. DMS availability offers the possibility of reducing the long-term costs of data processing and of increasing the capabilities of the business programmer by automating many of the functions now performed manually. The DB concept has so developed over the last decade that it can be implemented by most medium and large-scale users of data processing equipment. In the 1960s COBOL programming language emerged as the standard language for business DMS; in the 1970s a combination of data bases and COBOL will become the standard procedure for implementing business data processing systems.”

    Download PDF

  • Management Information

    Source: Forum 69 Published: October, 1969

    Information centers, data warehouses, search engines, and the Internet have all been proposed solutions to the discovery of information (answers) and the creation of knowledge from data, that unstructured assemblage of facts. But all of these inventions happened after that magic moment of the IBM 3330 disk drive (YES, 100 MB per disk spindle!), which arrived in 1970. This article on finding the buried nugget in a mountain of information gave the state of the art in 1969.

    Download PDF

  • Advent of Information And Inquiry Services

    Source: Data Management Magazine  Published: September, 1969

    Prior to the advent of Google, the Internet, and data warehouses, the problem of finding the information you needed was well understood. In this article Schussel talks about the 1960s advent of information inquiry services, the predecessor to information centers and the later turn-of-the-century Internet-based solutions.

    Even in the 1960s, among the costliest and most time-consuming problems facing industry and government were the storage, retrieval, and communication of knowledge. Human knowledge was understood to be expanding at such a tremendous rate that the documentation supporting this advance was concurrently mushrooming in a multitude of directions. They knew that computers would solve the problem, but it would still take a few years.

    Download PDF

  • Computers & Information

    Source: Collegiate Annual  Date: 1969

    Prepared as a career guide for young collegians, this article recommended the burgeoning information technology field as a top career choice.

    “You have just spent four or more years learning how to effectively manipulate information to get answers to problems. The problem might be one in engineering, physics, mathematics, or business administration. You solve the problem by the application of calculus, linear programming, legal concepts, stress analysis, or circuit theory. In most problems the basic data has been given in the text, and you needed to know the theory on how to use that data. Once you leave school, you’ll find that most real-world problems are just the other way around. The most difficult part of a problem is to get the basic data. Once you have the basic data, a fairly simple analysis will usually give you a good answer.

    “It is this unsolved problem of getting data in usable form that has resulted in the tremendous growth of the information systems field. Much of this growth has resulted from the advent of computers with tremendous capabilities for rapidly handling and manipulating information. The modern scientific community has created a fantastic wealth of information. Over 90% of all scientists who ever lived are alive today, working, creating discoveries and new information. In order to be useful, this information must be selectively disseminated to those who are interested in it, must be stored, and must be retrieved when necessary. This necessity for handling information has created the new field and science of information systems. Most computer oriented companies…

    Download PDF

  • 23 September, 1993 issue topics

    Source: Schussel’s Downsizing Journal Published: Sep 2005

    • Report from Database and Client/Server World, part II
    • Downsizing at Chrysler Financial, by George Schussel
    • I’m from the Government and here to help you
    • Technologies for business re-engineering, part I

    Download PDF

  • VII Pillars Of Productivity

    Source: Optimize Published: May 2005

    Professor Erik Brynjolfsson, holder of the Schussel Chair of Management Science at MIT, has been doing breakthrough research on how companies can achieve superior returns from their investments in information technology. This article summarizes the most recent research from his team. Some comments from the article follow:

    It’s productivity growth, more than any other economic statistic, that determines our living standards. If productivity grows at 1% per year, living standards will double every 70 years. If productivity grows at 3% per year, living standards will double in less than 25 years. In the 1970s, ’80s, and early ’90s, government statistics indicated that productivity growth in the U.S. economy plodded along at barely 1% annually. Most economists thought it would stay at that level indefinitely, but remarkably, since 1995, annual productivity growth has averaged more than 3%.
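
    A back-of-the-envelope check on those doubling times (standard compound-growth arithmetic, not taken from the article):

        $$T_{\text{double}} = \frac{\ln 2}{\ln(1+g)}, \qquad \frac{\ln 2}{\ln 1.01} \approx 70 \text{ years at } g = 1\%, \qquad \frac{\ln 2}{\ln 1.03} \approx 23 \text{ years at } g = 3\%.$$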

    What explains the soaring productivity in the United States? My research has found that IT-intensive companies tend to be more productive, and most economists now agree that growing investment in information technology has been the single most important reason for the resurgence in the past decade. However, it’s also true that, since 1995, thousands of IT projects have failed to deliver on their productivity promise each year. CEOs and CFOs are justifiably disappointed at CIOs’ inability to consistently document financial benefits from their IT budgets. In fact, they’ve been looking in the wrong place.

    The key to IT productivity lies outside the CIO’s office. Our most recent research suggests that whether IT improves productivity depends primarily on the complementary organizational investments that companies make in addition to their IT investments. That is, innovation in IT alone is insufficient. Companies also need innovation in organizational practices to reap the promised boost in productivity growth. Considering that up to 70% of the work done in large companies can be classified as information-processing work, it would be remarkable if the effective use of IT didn’t require changes in the organization of production. Indeed, we found that the costs and benefits of IT-enabled organizational capital is typically many times larger than the direct IT investments themselves.

    Download PDF

  • Home page of Erik Brynjolfsson

    Source: Sloan School Web-site Published: Jan 2005
    Home page of Erik Brynjolfsson, the George and Sandi Schussel Professor of Management at the MIT Sloan School of Management, Director of the Center for eBusiness at MIT, and Co-editor of the Ecommerce Research Forum. His research and teaching focus on how businesses can effectively use information technology (IT) in general and the Internet in particular.

    Download PDF

  • What, Me Worry? You Bet!

    Source: eWeek Published: Aug 2003

    George Schussel rebuts the viewpoint expressed by Nicholas G. Carr in the Harvard Business Review article “IT Doesn’t Matter” (May 2003). Carr argues that the IT industry is rapidly moving toward commodity status. He says IT is reaching the level of railroads and electrical power. Schussel disagrees and states that “The most successful companies of our time have achieved their success on the back of a superior investment in and strategy for IT. Winners such as Wal-Mart, Dell, Expedia, eBay, FedEx and UPS have built hugely successful enterprises in this way…Strategic IT is, by definition, not a commodity. If we starve our investment in IT, we can’t count on the rest of the world to stand still. For some companies, the answer is outsourcing…This isn’t good. Once you’ve outsourced everything, you’ve made Carr’s vision your reality. Where is your competitive advantage? Maybe you should worry.”

    Download PDF

  • Getting IT Together

    Source: CRM Magazine Published: Jul 2003

    With phrases like real-time enterprise, 360-degree view of the customer, and single instance of truth floating around the business world, there seems to be a lot more talk than actual substance to many of the claims that companies can access ERP, CRM, and other systems instantly from one interface. But integration can enable this to happen, says George Schussel, founder of DCI. “Integration is the backbone of where things are going, moving towards the real-time enterprise,” he says. “How a company accomplishes this can take a number of directions,” including looking at integration from the application layer, a business processes layer, or at the data layer.

    Whatever the path to integration a company takes, it must first understand how open standards are affecting integration efforts, consider the dos and don’ts of integration, and learn from how other companies are successfully integrating CRM with back-end systems, and data sources with CRM.

    Whatever system a company chooses, before integration can work, IT managers need to decide what parts of the organization should be integrated first, Schussel says. “You can’t integrate everything at once,” he says. “You need to understand your business processes, find out what it takes to stay competitive, and integrate the data and systems that will help you do that first.”

    Download PDF

  • Techie Musings

    Source: HOW DATABASES CHANGED THE WORLD Published: Jun 2003

    A review of the history-making evening at the Computer History Museum when some of the biggest names from the world of databases met to discuss how the world of computer data base management systems had evolved. The name of the evening was “HOW DATABASES CHANGED THE WORLD,” and it is available from the Computer History Museum on videotape or DVD. “I’ve just spent an hour and a half watching a presentation from the Computer History Museum: How databases changed the world. Featuring some big names from the database world: Chris Date, Herb Edelstein, Bob Epstein, Ken Jacobs, Pat Selinger, Roger Sippl and Michael Stonebraker, with moderator George Schussel. Quality stuff…If you’ve got the time, worth watching.”

    Download PDF

  • Is Real-Time for Real? Are vendors truly enabling businesses to operate in real time?

    Source: CRM Magazine Published: Mar 2003

    It’s a familiar story by now: Dell Computer Corp.’s real-time order and fulfillment system has resulted in customer satisfaction in excess of 97 percent and has helped to propel Dell to the number one slot in the computer industry. This approach was not created by some magic spell that Michael Dell conjured; it is simply one step to becoming a real-time enterprise (RTE)–and it is possible at your company, too.

    A real-time enterprise is able to share information with employees, customers, and partners in real time, anytime. This allows the company to instantly alert appropriate individuals, whether it be the company president, sales or marketing directors, or product managers, to changes in customer demand, inventory, competitive actions, and profitability.

    The RTE is the stuff a CFO’s dreams are made of: being able to view a company’s books at a moment’s notice and instantly see a full range of financial results for the day, week, month, year, etc. It makes call center managers salivate when they imagine never putting a customer on hold, and their agents having immediate and simultaneous access to a caller’s history and analytical information that can aid in cross-selling and upselling. Sales executives thrill at the idea that they can anytime, in real time, know what is happening in their sales pipeline.

    All these scenarios are possible, and will be, according to pundits of the real-time enterprise. “The real-time enterprise is the single most exciting movement in IT in the past decade,” says George Schussel, chairman of DCI Inc. And with so many vendors offering at least some type of real-time component in their product offerings, it is not hard to see why Schussel feels so strongly about the real-time enterprise.

    Download PDF

  • Building real time systems for the US Navy

    Source: Washington Technology Published: Feb 2003
    Building real time systems for the US Navy is the subject of this article. The iOra technology “shortens the time frame from when something is happening to when you can respond intelligently,” says George Schussel, former CEO of Digital Consulting Institute, a company that tracks the information technology industry. “The benefits of pulling a human out of the loop are cost savings and better decision-making in real time.”

    Download PDF

  • Go Real-Time or Go Home: Experts at DCI’s Real-Time Enterprise Conference say companies must operate in real time to survive.

    Source: CRM Magazine Published: Dec 2002

    Addressing an SRO crowd in the ballroom of San Francisco’s Palace Hotel, the keynote speakers at DCI’s premiere Real-Time Enterprise Conference held this week said that companies must be able to access vital information in real time to compete in a changing business atmosphere. They also agreed that the real-time enterprise is not about the enabling technologies, but about the business processes they support.

    George Schussel, founder and chairman of DCI, called the real-time enterprise the “single most-exciting IT initiative in years,” and noted that companies need to integrate their systems in real-time to cut expenses, increase revenues, and enhance the overall customer experience enough to create a long-term competitive advantage.

    Barton Goldenberg, a cochairman of the conference and president of ISM Inc., said the business process and technology enabling the real-time enterprise will come to be known as “the greatest accomplishment of the decade.”

    Download PDF

  • A Step Ahead, An interview by CFO Magazine

    Source: CFO Magazine Published: Oct 2002

    An interview by CFO Magazine with MIT economist Erik Brynjolfsson. Brynjolfsson leads the charge toward a greater appreciation of IT. For the past 15 years, Erik Brynjolfsson, the George and Sandi Schussel Professor of Management at the Massachusetts Institute of Technology’s Sloan School of Management, has been studying the economic impact of corporate IT investment and, more recently, the strategic drivers behind E-business. As a longtime believer in the power of IT to improve business productivity, he has at times been something of a lone voice in the wilderness; only recently has economic data seemed to confirm his optimistic view.

    Download PDF

  • Conversations with George Schussel

    Source: Community B2B Published: Jun 2002
    Conversations with George Schussel, published on Community B2B.com, 2002. Various Q&A sessions with George Schussel on topics about the real-time enterprise.

    Download PDF

  • Investment Opportunities from the Home Digital Front

    Source: PC magazine Published: Mar 2000

    This white paper was prepared in 2000 by George Schussel for publication in a popular newsstand PC magazine. Some quotes from the article follow:

    “I was recently interviewed by a magazine on the subject of what the “hot” trend in home computing will be for 2001. Anyone who can accurately predict these trends can more easily figure which companies are likely to grow at large rates. After thinking about it for a while, my choice for the most practical, useful home digital device for 2001 came down to digital cameras. In the note below I’ll talk about why and cover some of the other supposed “hot” new trends. The choice of digital cameras as the technology, which is the most real and will have the most home impact in 2001, seemed pretty easy to me. Digital photography has been around for a number of years, of course. But only recently have I seen multi-megapixel cameras that can meet or exceed the print and display quality of single lens reflex 35mm film cameras. The miniaturization of these cameras, in combination with a zoom lens makes them much more transportable than a traditional camera of comparable quality. The price of high quality digital cameras is dropping quickly and should be in the $200 range soon.”

    Download PDF

  • The Story of One Family’s Gift to MIT

    Source: MIT Sloan Alumni Magazine Published: Mar 2000

    In 1999, George and Sandi Schussel gave a gift to MIT to further the education of America’s managers. This is the story that led to the establishment of the Schussel Chair at MIT.

    Download PDF

  • Rockart named to Schussel Chair

    Source: ROI, MIT Sloan alumni magazine Published: Mar 2000

    This article announces that John F. (Jack) Rockart, an innovative contributor and a loyal MIT Sloan supporter for 34 years, has been named the George and Sandra Schussel Distinguished Senior Lecturer of Information Technology at MIT.

    In response to his new appointment, Rockart said, “The chair which the Schussels have provided for the Sloan School will enable us to expand our efforts in the area of information technology, a field in which Sloan is consistently ranked number one in the polls, including those by U.S. News & World Report and Business Week.”

    Download PDF

  • Schussel Sees Database Architecture As Key Issue

    Source: Newsbytes News Published: Apr 1998

    Enterprise architectures in the coming years will be multi-tiered, based on messaging rather than transaction processing, and built on software components, according to George Schussel, chairman of Digital Consulting Inc. In this environment, choosing the right database architecture will be critical, Schussel said at the Database and Client/Server World conference sponsored by his Andover, Massachusetts-based company.

    Many people don’t understand database architecture issues or care about them, Schussel said, but “by not paying attention to this area, you’re going to get into trouble.” The reason is that there are several different approaches to dealing with both relational and object oriented database capabilities, and which one is most suitable depends on what you are doing, he said.

    Schussel broke database vendors into four groups. The first group offers true universal servers, combining object and relational capabilities around a relational core but not supporting all the features of true object orientation; the main examples are Informix and IBM, he said. Oracle Corp. is in a separate group, Schussel said, because while “Oracle’s marketing department has introduced a universal server,” what the company offers is really a relational DBMS with five complex embedded data types added to it. “What you’re dealing with is a middleware solution,” Schussel maintained, and that is inherently slower.

    Download PDF

  • CEO of Digital Consulting Institute plays role of coach

    Source: Eagle-Tribune Published: Mar 1998
    Written for the Eagle-Tribune, this article provides a history of DCI: how it was started and its evolution into Web communities. It is a retrospective on how George Schussel has operated the company, along with some of the future vision for a major IT trade show enterprise.

    Download PDF

  • AlphaBlox Web-based Data Analysis Product

    Source: InfoWorld.com   Published: Dec 1997

    George Schussel reviews the new AlphaBlox Web-based data analysis product. Start-up company AlphaBlox details a Java-based application development system built on components at DCI’s Client-Server and Database World conference. The vendor in early 1998 will unveil a product called AlphaBlox Enlighten, for building customizable data-analysis applications, such as sales analysis, customer retention, and risk analysis. The intent is to enable both Web-based deployment and administration. Elements of Enlighten include Java-based components called Ready-to-Use Building Blocks; InterBlox, an application assembly framework; and BASE (Blox Application Server Environment), a server environment for distributing applications across the Web. “What they have is an n-tier architecture, with a Web application server that is developed in Java and produces Java code for interfacing with a back-end database for running applications,” said analyst George Schussel, of DCI Software Conferences & Expositions, in Andover, Mass. “The conclusion is, they are the first company that I know of to build a complete Java-based multitier application server product,” Schussel said.

    Download PDF

  • Client/Server, from dBASE to JAVA: The Next Step

    Source: George Schussel Published: Nov 1997

    In this article, Schussel explains how client/server is evolving to support the emerging Internet paradigm. He discusses how the Internet is itself a 3- to N-tier client/server network. Well-known Internet applications like Gopher, FTP (File Transfer Protocol), the World Wide Web, and SMTP (Internet mail) are client/server in architecture. Schussel discusses first- and second-generation web applications.

    In 1996 he predicted that the Internet of the future would likely have the following characteristics:

    1. TV-quality video and CD-quality audio transmittable over HTTP and interpreted by your browser, regardless of operating system.

    2. Browsers have become the next “killer application”, joining spreadsheets and word processors. In addition, however, all desktop applications will get browser/Internet buttons.

    3. The Internet itself becomes a universal and ubiquitous broadband digital dialtone. Cable companies, telephone companies and wireless operators all compete to provide that dialtone.

    Download PDF

  • Tales from the Java shops

    Source: Application Development Trends Published: Oct 1997
    The emerging popularity of Java-based development is chronicled here. Distributed computing in this environment is described by George Schussel, chairman and CEO of Digital Consulting Inc. He describes the emerging architecture as “browser/server computing.” The connecting middleware could comprise Corba or DCOM object technology. In fact, more and more tools vendors are citing Java-enabled Corba in their pitches. Such concepts had been gaining currency in client/server circles in recent years. Thus, with object middleware connecting tiers, the browser/server design crew may be taking up where the client/server crew left off.

    Download PDF

  • Interview with DCI’s CEO, George Schussel

    Source: Information Builders News Published: Sep 1997

    A wide-ranging interview discussing topics such as client/server computing, Internet computing, middleware, and application architectures. History, current status, and future evolution are all discussed.

    Download PDF

  • Will that be NC or PC? Rivals duke it out in debate

    Source: Computing Canada Published: Apr 1997

    TORONTO — It was billed as the great debate, and those looking for insight on what the future has in store for client-server witnessed, if nothing else, a little mud-slinging here between some of the industry’s main players. The scene was DCI’s Database and Client-Server World Conference. The panelists were representatives from IBM Corp., Sybase Inc., Netscape Communications Canada, Microsoft Corp. and Oracle Corp.

    And the questions? George Schussel, panel moderator and CEO of DCI, jumped to the chase early by asking: has the PC become too complicated and too expensive to maintain?

    The answers quickly narrowed the debate as one between the personal computer, fat with applications and files, and the network computer, a thin client with minimal memory which leaves control of applications with a network server and draws them when needed.

    “I’m obviously the designated defender of the NC today,” responded Chuck Rozwat, senior vice president of Oracle Corp.’s database server division. “The PC is here to stay … but you’re going to pay for it.”

    Download PDF

  • Debate on the network computer

    Source: DCI’s Database and Client-Server World Conference Published: Apr 1997

    It was billed as the great debate, and those looking for insight on what the future has in store for client-server witnessed, if nothing else, a little mud-slinging here between some of the industry’s main players.

    George Schussel, panel moderator and CEO of DCI, jumped to the chase early by asking: has the PC become too complicated and too expensive to maintain?

    “I’m obviously the designated defender of the NC today,” responded Chuck Rozwat, senior vice-president of Oracle Corp.’s database server division. “The PC is here to stay … but you’re going to pay for it.”

    Rozwat was referring, of course, to the area of PC maintenance, where logic holds that simple costs less, complex costs more.

    Robert Epstein, executive vice-president of Sybase Inc., who played devil’s advocate for most of the session, said the maintenance problem of having too many applications on a PC isn’t so much a technology problem but an issue of self-discipline.

    “The cost of deploying an app should be as close to zero as possible,” said Epstein, adding that after you deploy it “you should then have the self-discipline to leave it alone.”

    At the same time, Epstein turned to Norm Judah, director of enterprise program management at Microsoft Corp., and said: “I think we all agree that the cost of PCs is too expensive with regards to the software,” referring specifically to each, more memory-hungry and costly generation of operating system that Microsoft launches.

    Download PDF

  • Debate Focuses On NC Merits

    Source: Newsbytes News Published: Apr 1997

    In the press and speakers’ room before the session billed as “The Great Debate: Has The Internet Killed Client/Server?” George Schussel, whose Digital Consulting Inc. runs the Database and Client/Server World conference going on this week, was overheard telling some panelists that “The point of this debate is just to have….fun.” They did that, though they didn’t talk very much about the impact of the Internet on client/server.

    The session concentrated instead on an admittedly related topic: the relative merits of network computers (NCs) versus conventional personal computers. With a panel including participants from Microsoft Corp., Oracle Corp., and Netscape Communications Corp. — all of which have strong positions on this subject — the discussion was bound to get entertaining.

    The exchange that got the biggest audience reaction started when Schussel, acting as moderator, asked “What value will a used NC be?” Norm Judah, director of enterprise program management at Microsoft, replied: “A doorstop?” And Netscape Vice-President Kevin Fitzgerald, picking up on the theme of PC obsolescence that ran through the debate, retorted that “It’ll be a cheaper one.”

    Download PDF

  • The new browser-server architecture

    Source: Computing Canada Published: Mar 1997

    A new architecture that might be called browser-server computing is emerging. It will not entirely replace the client-server model we know today, but it will probably at least influence the way most client-server systems work.

    There are plenty of unanswered questions about this new model, but companies should start to explore it, according to George Schussel, founder and chief executive of Digital Consulting Inc. in Andover, Mass.

    Speaking at his company’s Internet World conference in Toronto last fall, Schussel said using Internet protocols to link clients to servers makes good sense in several ways. It cuts costs. It makes it easier to support a mix of client platforms, because the application need only be concerned with the Web browser.

    Download PDF

  • Client/Server Software Architectures–An Overview

    Source: Carnegie Mellon Software Engineering Institute Published: Jan 1997
    A defining white paper from the Carnegie Mellon Software Engineering Institute on the subject of client/server architectures. The paper is a summary of papers, lectures, and work by George Schussel and Herb Edelstein.

    Download PDF

  • Proceed Cautiously To Browser / Server

    Source: Newsbytes News Published: Oct 1996

    A new computing architecture that might be called “browser/server computing” is emerging. It will not replace the client/server model we know today, and it is still untested and may not work very well, but forward-thinking companies should be exploring it cautiously, said George Schussel, founder and chief executive of Digital Consulting Inc., during his company’s Internet Expo conference in Toronto Tuesday.

    Delivering a keynote address, the Andover, Massachusetts-based consultant said using Internet technology as the protocol linking clients to servers makes good sense in a number of ways. It reduces cost, and it simplifies dealings with a variety of hardware clients because the application need only be concerned with the Web browser, which is relatively standardized.

    Schussel traced the history of client/server computing back to the file server model, in which virtually all work was done on the client, while the server was just a place to store data in files that had to be downloaded to the client for use. This led to the two-tier client/server model, which put a proper database server with record locking and such features in place of the simple file server.

    Then in the past two years came three-tier client/server, using an application server separate from the database server and relying on the client only for the presentation logic.

    Schussel said the browser/server model closely resembles this third model, except that the protocol used to link client and servers is Internet technology — either the public Internet or an internal intranet.

    Download PDF

  • Client-Server, from dBASE to JAVA

    Source: George Schussel   Published: Mar 1996
    The purpose of this article was to explain how a “client/server” architecture is really a fundamental enabling approach that provides a flexible path for migrating to new technologies like Web-enabled computing. The old paradigm of host-centric, time-shared computing had given way to the new client/server approach, which was message-based and modular. The article discusses file server, 2-tier client/server, and various forms of 3-tier client/server approaches. Distributed components and warehouse approaches are also discussed.
    Download PDF

  • Early approaches to Distributed Applications

    Source: George Schussel Published: Jan 1996

    George Schussel provides a short discussion of the following approaches to computing architectures:

    (1) Mainframe architecture: all intelligence is within the central host computer. Users interact with the host through a terminal that captures keystrokes and sends that information to the host. Mainframe software architectures are not tied to a hardware platform. User interaction can be done using PCs and UNIX workstations.

    (2) File-sharing architecture: The original PC networks were based on file sharing architectures, where the server downloads files from the shared location to the desktop environment. File sharing architectures work if shared usage is low, update contention is low, and the volume of data to be transferred is low.

    (3) Client/server approaches: PCs are now being used in client/server architectures. This approach introduced a database server to replace the file server. Using a relational database management system (RDBMS), user queries could be answered directly. The client/server architecture reduced network traffic by returning a query response rather than transferring the entire file (see the sketch following this list). It improved multi-user updating through a GUI front end to a shared database. In client/server architectures, Remote Procedure Calls (RPCs) or Structured Query Language (SQL) statements are typically used to communicate between the client and the server.
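
    The sketch below is a hypothetical contrast (not from the article) between approaches (2) and (3): the file-sharing client filters an entire downloaded file itself, while the client/server client sends SQL and receives only the query response; the file contents, table name, and data are invented for illustration.

      # File-sharing style: the "server" is only shared storage, so the whole file
      # crosses the network and the client does all the filtering.
      import csv
      import io
      import sqlite3

      shared_file = io.StringIO("id,region,amount\n1,EAST,100\n2,WEST,250\n3,EAST,75\n")
      east_rows = [row for row in csv.DictReader(shared_file) if row["region"] == "EAST"]

      # Client/server style: the database server evaluates the query and returns only
      # the matching rows (sqlite3 stands in for the database server here).
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
      conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                       [(1, "EAST", 100), (2, "WEST", 250), (3, "EAST", 75)])
      answer = conn.execute("SELECT id, amount FROM sales WHERE region = 'EAST'").fetchall()

      print(east_rows)  # the client filtered the full file locally
      print(answer)     # the server returned just the query response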

    Download PDF

  • The Foundation for Downsizing: Distributed and Client / Server Database Management Systems (DBMS)

    Source: White paper for Database World attendees Published: Jun 1995

    This white paper by George Schussel provides a comprehensive overview of the what, why, and how of distributed and client/server computing. It explains the benefits of this technology and its usefulness in both decision support and transaction processing environments. Client/server and distributed DBMS technologies are related, but not identical. This report explains the differences between the two approaches.

    Other topics covered include a criteria list for describing the functionality expected of a true distributed DBMS. Some advanced levels of functionality are described as well. Examples of successful case studies are used to prove the point that client/server and distributed DBMS technology is definitely ready for serious transaction processing workloads.

    The report concludes with a list of advisories and cautions for MIS managers preparing to take the client/server route.

    Download PDF

  • Convergent View column – Which Language Should You Speak?

    Source: Client/Server Today Magazine Published: Apr 1995

    The diversity of products supported by client/server computing offered developers a myriad of choices for application development. Schussel delves into the product-choice conundrum and offers predictions on which products are best for various purposes.

    Download PDF

  • DATABASE & CLIENT/SERVER WORLD is boldly going where no show has gone before

    Source: Business Wire Published: Mar 1995

    Digital Consulting Inc., the producer of DATABASE & CLIENT/SERVER WORLD Boston, the largest database and client/server conference and exposition, June 13-15, 1995, at the Hynes Convention Center, has addressed the growing interplanetary needs of the galaxy by implementing DATABASE & CLIENT/SERVER WORLD, the 2ND Generation.

    This event will feature over 200 of the brightest stars in the industry, 900 exhibits, and over 25,000 I/S professionals, making it the largest event in the client/server generation. Dr. Michael Hammer of Hammer and Co. and John Thompson of the IBM Corp. will launch a star-studded cast of database and client/server experts unmatched anywhere. George Schussel, founder of DCI, is the chairman of the World event.

    New to this year’s voyage will be the first publicly announced Computerworld Client/Server Journal “Top 25” Client/Server Effectiveness Awards. During a special panel session, Computerworld Client/Server Journal and Cambridge Technology Partners will identify the 25 most effective corporate users of client/server technology in North America and discuss the selection methodology and reveal how the “Top 25” made client/server work for them.

    Download PDF

  • Convergent View column – Win 95: I’m looking at the Bigger Picture

    Source: Client/Server Today Magazine Published: Mar 1995

    Schussel explains the impact that the coming Windows 95 release is going to have on the computer industry. Anyone who followed advice given in the column made a bundle of money on Microsoft stock. Schussel says “Once it arrives, Windows 95 will sweep the industry like a tornado, unlike anything before.”

    Download PDF

  • A Generalized Way Of Thinking About N-Tier Client/Server Architectures

    Source: DCI’s Database and Client/Server World   Published: Mar 1995

    This white paper explains the evolution of computing architectures from file server approaches like dBASE through 2-tier and 3-tier client/server computing. The paper establishes an approach for explaining and understanding n-tier client/server computing approaches, which have become very popular in the Internet era. The paper’s discussion of transaction monitors foreshadowed the evolution of web server technologies. (Privately published to attendees of DCI’s Database and Client/Server World, 1995.)
    Download PDF

  • Convergent View column – Survival Talk about NextStep and SQL

    Source: Client/Server Today Magazine Published: Feb 1995

    Before he became famous (again) with his return to Apple and purchase of Pixar Animation, Steve Jobs was the CEO of NeXT, whose NextStep platform was a leader in the object-oriented application development business. This column covers what Jobs was doing. In the second part of the column, Schussel discusses some interesting opinions from Chris Date, well-known guru of the relational database movement.

    Download PDF

  • Convergent View column – Paramount, Avis Saw the Light

    Source: Client/Server Today Magazine Published: Jan 1995

    Two cases, Paramount Pictures and Avis Rent a Car, are described by Schussel.

    Download PDF

  • Convergent View column – Of Databases and Diversity

    Source: Client/Server Today Magazine Published: Dec 1994

    Like putting together a component hi-fi system, the client/server architecture allows for a choice of products and a ‘best of breed’ approach. The column explains this approach and offers examples.

    Download PDF

  • DATABASE REPLICATION: PLAYING BOTH ENDS AGAINST THE MIDDLEWARE

    Source: Client/Server Today Magazine Published: Nov 1994

    In this second article of a two-part series, Decision Support and Transaction Processing are identified as the two primary categories of computing in which database support is required. The use of replication servers is analyzed for both categories by George Schussel. Different technologies, including a detailed look at IBM’s approaches, are discussed. The benefits and problems associated with using replication services are analyzed.

    Download PDF

  • Convergent View column on Advantages to the Client/Server Architecture

    Source: Client/Server Today Magazine Published: Nov 1994

    Schussel gives a summary overview of the history and rationale for client/server computing.

    Download PDF

  • DATABASE REPLICATION: WATCH THE DATA FLY

    Source: Client/Server Today Magazine Published: Oct 1994

    This is the first of a two-part series by George Schussel on database replication. This first article focuses on the overall importance and benefits of replication and provides an introduction to the two forms of replication services. In the second part, published the following month, differences between two forms of replication are examined in detail.

    Motivation for this article came from the fact that as distributed operational applications were becoming more widely used across large enterprises, pressure was increasing to maintain local copies of key corporate data in order to provide better response time for local queries. As a result, corporate databases, or at least the data residing in those databases, had to migrate out from the secure and highly optimized sanctuary of the glass house into the distributed and frequently chaotic world of open systems.

    Download PDF

  • REPLICATION, THE NEXT GENERATION OF DISTRIBUTED DATABASE TECHNOLOGY

    Source: George Schussel research paper Published: Jun 1994

    This is a major research paper by George Schussel on the technique of replication as a method for supporting distribution of data to multiple sites. Some comments from the introduction to that paper follow:

    Replication, or the copying of data in databases to multiple locations to support distributed applications, is an important new tool for businesses in building competitive service advantages. New replicator facilities from several vendors are making this technology much more useful and practical than it’s been in the past. In this article we will go into enough detail on replication for the reader to understand the importance of replication, its benefits and some of the related technical issues.

    Buying trends today clearly indicate that companies want their applications to be open and distributed closer to the line of business. This means that databases supporting those companies have to migrate to this same open, distributed world. As distributed operational applications become more widely used across large enterprises there is going to be a requirement for increasing numbers of data copies to support timely local response. This is because the propagation uncertainties and costs associated with real time networks and/or distributed DBMS solutions are a headache to deal with.

    Replication provides users with their own local copies of data. These local, updatable data copies can support increased localized processing, reduced network traffic, easy scalability and cheaper approaches for distributed, non-stop processing.

    While replication or data copying can clearly provide users with local and therefore much quicker access to data, the challenge is to provide these copies to users so that the overall systems operate with the same integrity and management capacity that is available with a monolithic, central model. For example, if the same inventory records exist on two different systems in two different locations, say New York and Chicago, the system needs to ensure that the same product isn’t sold to two separate customers. Replication is the best current solution for many applications.
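
    A minimal sketch of the New York/Chicago collision described above follows; it is not from the paper, and the record layout, version counter, and site names are illustrative assumptions about how a replicator might detect that both copies sold the same last unit.

      from dataclasses import dataclass

      @dataclass
      class Record:
          product: str
          quantity: int
          version: int  # incremented on every local update to this record

      def sell(replica: dict, product: str) -> Record:
          rec = replica[product]
          if rec.quantity < 1:
              raise ValueError("out of stock")
          replica[product] = Record(product, rec.quantity - 1, rec.version + 1)
          return replica[product]

      # Both sites start from the same replicated state: one unit left, version 7.
      new_york = {"widget": Record("widget", 1, version=7)}
      chicago = {"widget": Record("widget", 1, version=7)}

      ny_update = sell(new_york, "widget")    # produces version 8 at New York
      chi_update = sell(chicago, "widget")    # produces version 8 at Chicago, from the same base

      # Replicator-side check: two updates derived from the same version of the same
      # record collide; one must be rejected or reconciled rather than silently applied.
      if ny_update.version == chi_update.version:
          print("collision detected: the last widget was sold at both sites")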

    Download PDF

  • Replication, the next Generation

    Source: Georges Quarterly Published: Jun 1994

    A technical discussion by George Schussel of issues with database replication. The analysis covers peer-to-peer and master/slave architectures. There is a discussion of how collisions are handled. Fault tolerance is discussed, as is replication for transaction processing environments. Both synchronous and asynchronous approaches are discussed. The requirements for a non-stop approach are covered.

    Download PDF

  • Parting is painful, but legacy apps can’t always be given a good home

    Source: Computing Canada Published: Mar 1994

    Is it possible to evolve mainframe applications to client-server? And even if you could, should you? For the past three years, Dr. George Schussel of Digital Consulting Inc. has run scores of seminars aimed at companies wanting to downsize legacy applications from the mainframe to client-server platforms.

    In his Downsizing Journal, Schussel lists several aspects that need to be changed in order to evolve old mainframe applications to the new technologies. First, the application architecture must be changed because applications will run primarily on desktops and data will reside on shared servers. All of the major client-server DBMSs provide capabilities for stored procedures and triggers. Stored procedures are precompiled code that can be called by the application running on the client, significantly reducing network traffic. Triggers are routines that are automatically executed when the database reaches predefined conditions. Stored procedures and triggers are available with some host environments, but very few of the old legacy applications used them. Finally, applications must be re-architected to take advantage of the graphical user interface (GUI).
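
    The trigger idea can be made concrete with a small sketch; this is not from the journal, sqlite3 merely stands in for a client-server DBMS (it supports triggers but not stored procedures), and the tables and threshold are invented for illustration.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE inventory (product TEXT PRIMARY KEY, quantity INTEGER);
      CREATE TABLE reorder_alerts (product TEXT, noted_quantity INTEGER);

      -- Runs automatically inside the database whenever an update drops a product
      -- below the reorder point; no extra client round trip is involved.
      CREATE TRIGGER low_stock AFTER UPDATE ON inventory
      WHEN NEW.quantity < 5
      BEGIN
          INSERT INTO reorder_alerts VALUES (NEW.product, NEW.quantity);
      END;
      """)

      conn.execute("INSERT INTO inventory VALUES ('widget', 10)")
      conn.execute("UPDATE inventory SET quantity = 3 WHERE product = 'widget'")
      print(conn.execute("SELECT * FROM reorder_alerts").fetchall())  # [('widget', 3)]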

    In the light of all this, is it feasible to evolve old mainframe applications to client-server? Schussel unequivocally says no. Even if it were reasonable and possible to port old applications to the new technologies, would you really want to? Many legacy applications currently running on mainframes were designed and developed 15 to 20 years ago. The major goal of downsizing that we have heard from customers is to replace old applications that aren’t delivering the functionality they want, and haven’t for years.

    Porting these systems to a downsized UNIX platform not only doesn’t make them open or client-server, but doesn’t do anything to change the old inadequate functionality. So which part of the investment in legacy systems can be saved? Clearly, it must be that legacy data. Or must it?

    Download PDF

  • 24 October, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Oct 1993

    • Technologies for business re-engineering, part II
    • Past and future of downsizing by George Schussel

    Download PDF

  • 22 August, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Aug 1993

    • Report from Database and Client/Server World, part I
    • Succeeding with Lotus Notes
    • Open operating systems
    • What’s the object of your database, part II, by George Schussel

    Download PDF

  • 21 July, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Jul 1993

    • Report from Software World
    • What’s the object of your database, part I, by George Schussel
    • Downsizing in the textile industry

    Download PDF

  • 20 June, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Jun 1993

    • Doing client/server right the first time, part II by George Schussel
    • The latest on operating system wars
    • CASE is dead; long live CASE

    Download PDF

  • 19 May, 1993 issue topics

    Source: Schussels Downsizing Journal Published: May 1993

    • Doing client/server right the first time, part I by George Schussel
    • Rules for downsizing
    • Operating system wars

    Download PDF

  • Rightsizing: Time Waits for no Technology

    Source: Netware Connection Published: May 1993

    Rightsizing, downsizing, smartsizing, resizing: you can’t open a network trade magazine these days without running across one or another of these trendy buzzwords. So what is this phenomenon all about?

    Several technologies that arose in the 1980s paved the way for the migration from mainframes to PC networks. Micro-mainframe links, or cards, that allowed PCs to act as dumb terminals, and PC-based mainframe language compilers that allowed you to write mainframe applications on the PC were just a few. Dr. George Schussel, founder of 11-year-old Digital Consulting Inc., picked up on these and other indicators and, in 1989, initiated the biannual Downsizing Expo, the first and only trade show dedicated to this business issue.

    “Porting applications from a mainframe to a PC is a very natural trend,” says Schussel, who noticed in the latter part of the 1980s that a few avant-garde firms were porting applications from the mainframe to the PC environment. He recalled a few of these rare cases of rightsizing: “One that I visited that I found interesting was Echlin Manufacturing, out of Connecticut. They had rebuilt all of their corporate systems for a PC LAN. It saved a huge amount of budget money. You began to hear about rare kinds of companies doing things like that; it was stunning in each case. [For example,] Turner Construction, out of New York City, got rid of all their mainframes and minis and ran their entire company on 3,000 PCs. People began to realize you can do it.”

    Schussel cites the advent of client/server computing, which was introduced by the Sybase Corporation, as fundamental to the climate for change. While client-server computing was slow to get off the ground due to the lack of desktop standards, the emergence of Windows 3.0 changed all that. “When Windows 3.0 hit the market, it became clear to me instantly. . . . I went out on the lecture circuit and began telling people the battle is over; Windows is the standard. I got a lot of criticism back in 1990 and 1991 for saying that . . . but now no one disputes it.” The introduction of Windows 3.0, says Schussel, was an incentive for tools manufacturers that make high-level application development languages to develop tools for building client-side applications for Windows.

    Download PDF

  • Getting IT Together

    Source: CRM Magazine Published: May 1993

    Integration is a four-letter word in the world of CRM, but it doesn’t have to be. What follows are the real issues behind integrating CRM solutions with an enterprise’s existing systems, and how to simplify what could otherwise grind CRM initiatives to a halt. With phrases like real-time enterprise, 360-degree view of the customer, and single instance of truth floating around the business world, there seems to be a lot more talk than actual substance to many of the claims that companies can access ERP, CRM, and other systems instantly from one interface. But integration can enable this to happen, says George Schussel, founder of DCI. “Integration is the backbone of where things are going, moving towards the realtime enterprise,” he says. “How a company accomplishes this can take a number of directions,” including looking at integration from the application layer, a business processes layer, or at the data layer.

    Download PDF

  • 18 April, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Apr 1993

    • Point – counterpoint on hot issues
    • Life and times of Information Builders
    • A note on repositories by George Schussel

    Download PDF

  • 17 March, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Mar 1993

    • Visit to IBM Santa Teresa, part III, by George Schussel
    • For laptop junkies
    • This IBM is for you

    Download PDF

  • 16 February, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Feb 1993

    • Visit to IBM Santa Teresa, part II, by George Schussel
    • Novell, UNIX and the new world order
    • The last thing IBM did right
    • Event driven client/server development, part I

    Download PDF

  • 15 January, 1993 issue topics

    Source: Schussels Downsizing Journal Published: Jan 1993

    • Visit to IBM Santa Teresa, part I, by George Schussel
    • Downsizing at Motorola
    • What’s wrong with Borland?
    • Laptops – the beat goes on

    Download PDF

  • 14 December, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Dec 1992

    • Distributed & client/server DBMS, part IV, by George Schussel
    • Windows based tools
    • How to cost MIPS
    • The latest generation of notebook computer

    Download PDF

  • 13 November, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Nov 1992

    • Distributed & client/server DBMS, part III, by George Schussel
    • A visit to IBM Pacific Employees Credit Union
    • Migrating DB2 applications, part II

    Download PDF

  • Downsizing Applications from Mainframes to Client/Server

    Source: Software World Digest Published: Nov 1992

    George Schussel, Chairman of Database World, reviews the issues in migrating mainframe applications to the client/server world. Several caselets are discussed. Some text from the article follows:

    Let’s first look at the technical process involved in building a client/server approach. Here, we confront a very different architecture, one that places applications on desktops and data on shared servers. Jim Davey, Senior Consultant for DCI, has been developing a new analysis approach for the client/server environment (an article from Jim is in the works). Regardless of the analysis approach used, it is clear that the native I/O code that once resided in the application will now be removed and/or executed on a different computer. So the first step in a conversion to client/server computing is to re-analyze data I/O and recode to accommodate any necessary changes.

    Download PDF

  • 12 October, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Oct 1992

    • Distributed & client/server DBMS, part II, by George Schussel
    • Data quality and downsizing
    • Schussel convinces elephants to downsize
    • Downsizing or Rightsizing?

    Download PDF

  • 11 September, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Sep 1992

    • Distributed & client/server DBMS, part I, by George Schussel
    • Downsizing DB2 applications
    • Ken Olsen & DEC
    • Collaborative distributed application development

    Download PDF

  • Distributed and Client/Server DBMS: Underpinning for Downsizing

    Source: George Schussel and Stacey Griffin Published: Aug 1992

    In this white paper, Schussel and Griffin give a comprehensive overview of distributed databases, client/server computing, and downsizing. In particular, a detailed discussion of the difference between distributed databases and client/server database servers is provided. Functional requirements for DBMSs are discussed, and a detailed presentation of IBM technologies is included. The evolution of these different distributed computing architectures is also analyzed.

    Download PDF

  • 10 July – August, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Jul 1992

    • Sybase & Novell, database ninjas or odd couple?
    • Which is mightier – the pen or the keyboard?
    • A look at Foxboro’s downsizing
    • Downsizing EXPO, San Francisco, by George Schussel

    Download PDF

  • 09 June, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Jun 1992

    • Novell’s NetWare and its future
    • What will happen to the dBase market?
    • Battle of the 90’s, part II, by George Schussel

    Download PDF

  • DISTRIBUTED AND CLIENT-SERVER DBMS: UNDERPINNING FOR DOWNSIZING

    Source: White Paper by George Schussel Published: Jun 1992

    This paper explains in some detail the difference between client/server DBMS and distributed DBMS, a point of interest to technicians implementing wide area databases.

    One of the key trends of modern computing is the downsizing and distributing of applications. This is happening because companies want to take advantage of modern microprocessor technology and at the same time gain the benefits of new styles of software using graphical user interfaces (GUIs). Client/server and distributed database technologies are fundamental enabling technologies for downsizing.

    Client/server approaches allow the distributing of applications over multiple computers, with the databases residing on server machines while applications run on client computers, usually PCs. A local area network (LAN) provides the connection and transport protocol to connect the clients and servers.

    Distributed databases offer capabilities similar to client/server databases, except more so. More so in the sense that a database management system (DBMS) resides on each node of the network and allows transparent access to data anywhere on the network, without requiring the user to physically navigate to the data. Many of the advanced functions described later in this chapter, such as stored procedures, triggers, and two-phase commit, are available in both client/server and distributed DBMSs.
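
    A hypothetical sketch of the two-phase commit idea mentioned above follows; it is not from the white paper, and the participant class, node names, and voting rule are simplified assumptions about how a coordinator keeps one transaction atomic across several nodes.

      class Participant:
          """One node holding part of a distributed transaction."""
          def __init__(self, name: str, healthy: bool = True):
              self.name = name
              self.healthy = healthy

          def prepare(self) -> bool:
              # Phase 1: persist the pending changes and vote on whether commit is possible.
              return self.healthy

          def commit(self) -> None:
              print(f"{self.name}: committed")

          def rollback(self) -> None:
              print(f"{self.name}: rolled back")

      def two_phase_commit(participants) -> bool:
          if all(p.prepare() for p in participants):    # phase 1: collect votes
              for p in participants:
                  p.commit()                            # phase 2: commit everywhere
              return True
          for p in participants:
              p.rollback()                              # phase 2: abort everywhere
          return False

      two_phase_commit([Participant("new_york_db"), Participant("chicago_db")])
      two_phase_commit([Participant("new_york_db"), Participant("chicago_db", healthy=False)])
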
    Download PDF

  • 08 May, 1992 issue topics

    Source: Schussels Downsizing Journal Published: May 1992

    • Microsoft watch: the next chapter
    • Battle of the 90s, by George Schussel
    • Downsizing: what’s really happening

    Download PDF

  • 07 April, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Apr 1992

    • A diary from Japan by George Schussel
    • Converting existing applications to client/server
    • Downsizing: what’s really going on

    Download PDF

  • 06 March, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Mar 1992

    • How to evaluate client/server systems II, by George Schussel
    • Microsoft Watch
    • Downsizing around the world
    • A look at the new programming

    Download PDF

  • Mainframe Moaners

    Source: Computing Canada Published: Mar 1992

    Speaking at a conference in Toronto, George Schussel said “At many downsizing conferences and seminars, I have had opportunities to meet IS staffers dealing with the issue of downsizing. Among this audience, there is typically a group of old friends: the mainframe bigots. These folks are absolutely against any movement towards distributing and downsizing applications; they love their “Big Iron.”

    “I have compiled a list of the basic excuses I hear muttered time and time again by mainframe enthusiasts. So, when the time comes, with this list, you will be armed with informed arguments powerful enough to overcome any bigot’s excuse.”

    “Excuses one and two: PCs during the day are turned on and off while mainframes aren’t; and, PC MIPS just aren’t comparable to mainframe MIPS. The first excuse is true for standalone, PC-based personal computing. However, downsizing is all about porting mainframe software and mainframe thinking onto PC networks. In the 1990s, the “personal” aspect will somewhat leave PCs. They will be left on constantly, and work will be allocated to the idle workstations from the network operating system.”

    “The second part of this objection, that PC MIPS aren’t the same as mainframe MIPS, is true but irrelevant. The comparison comes out so much in favor of the PC workstation that a few hundred per cent one way or the other won’t be noticeable in the real, day-to-day world.”

    Download PDF

  • 05 February, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Feb 1992

    • How to evaluate client/server systems I, by George Schussel
    • Will you need a network operating system?
    • Favorite presentations
    • An interview with Borland

    Download PDF

  • The three LANs of our time: Microsoft, Banyan and Novell

    Source: Computing Canada Published: Feb 1992

    For a technology that began with a modest goal, it’s now apparent that the local-area network (LAN) operating system (O/S) is one of downsizing’s critical enabling technologies. The LAN O/S was originally created to function as a collection of utilities capable of sharing files and support services among PCs. As PC networks expanded, however, it became clear that networks, PCs, and servers had the capabilities necessary to replace mainframes.

    As a result, adequate software had to be created to allow task management and co-ordination across the network. The LAN O/S is now assuming this sophisticated role in managing network cooperative processing transactions.

    Prior to LAN O/Ss, the problem in recreating the functionality of a mainframe software system across networks and workstations was that there was no PC or LAN equivalent to the full functionality of any mainframe software environment, with the exception of application development languages. In a mainframe environment, operating systems, transaction monitors, time-sharing monitors, database management systems, and development languages are assembled in a coordinated fashion to complete the transaction processing functions. So, in order to write real-time, interactive, transaction processing systems, software developers must have an O/S that provides multi-user, multi-tasking, re-entrant, and preemptive services. The question is, how do you proceed on a PC LAN if you want to create comparable mainframe O/S and transaction monitor functionality?

    The answer is the PC LAN O/S…

    by George Schussel

    Download PDF

  • 04 January, 1992 issue topics

    Source: Schussels Downsizing Journal Published: Jan 1992

    • DeBoever’s report from the downsizing front
    • The year of the network, by George Schussel
    • Beware of flying mice!
    • Revelation Technologies

    Download PDF

  • 03 November-December, 1991 issue topics

    Source: Schussels Downsizing Journal Published: Nov 1991

    • Life with laptops by George Schussel
    • Why I can’t downsize
    • LAN operating systems
    • Borland – having its cake and eating it
    • Apple has a winner

    Download PDF

  • Downsizing has arrived; MIS forced to change in effort to cut costs

    Source: Software Magazine Published: Nov 1991

    Some call the process downsizing. Others label it rightsizing. Whatever the process is called, observers say, the age of network and distributed computing has arrived. The long-expected shift from time-shared mainframe-class computers to network-based computing with clients and servers has finally become a major industry trend during the 1990s.

    This comes as no surprise to many observers, who noted that MIS departments reached the point, during the late 1980s, where there was no choice but to cut costs. “Previous new technologies talked about increased capability; downsizing is the first technology I have seen that talks about cutting cost,” said George Schussel, president of Digital Consulting, Inc., Andover, Mass., during the recent DCI-sponsored Downsizing Expo in Anaheim, Calif.

    “The terms ‘downsizing’ and ‘open systems’ are almost synonymous,” contended Schussel. “It’s hard to imagine doing one without the other. In downsizing, hardware environments are open–DOS, OS/2, Novell Inc.’s NetWare and Banyan Systems Inc.’s Vines run on many brands of hardware. Software is open, too. There is a choice of different tool sets working on different databases. What is sold is not proprietary; it’s one’s own preference.”

    Download PDF

  • 02 October, 1991 issue topics

    Source: Schussels Downsizing Journal Published: Oct 1991

    • What’s wrong with IBM, by George Schussel
    • Will you still need a mainframe?
    • The time is right for downsizing
    • Parallel processing at NCR

    Download PDF

  • 01 September, 1991 issue topics

    Source: Schussels Downsizing Journal Published: Sep 1991

    • Borland and Ashton Tate
    • Downsizing Outlook by George Schussel
    • Apple & IBM, the winners and losers
    • Saving money with downsizing
    • Getting started with downsizing
    • Advice from Herb Edelstein

    Download PDF

  • George in Japan

    Source: Nikkei Computer Published: Sep 1991

    In 1991 George Schussel traveled to Japan with Eckhard Pfeiffer, Compaq CEO, to help open Compaq’s first office in Japan (Tokyo) and introduce the benefits of ‘downsizing’ to Japan’s computer users. These articles appeared in Nikkei Computer and chronicled some of Schussel’s efforts. They’re in Kanji for our Japanese friends.

    Download PDF

  • GUI tools rolling out as enabling machines

    Source: Software Magazine   Published: Sep 1991

    Industrial-strength front-end tools are beginning to become available for client/server architectures. These tools take advantage of PC graphics, connect to many server databases and offer powerful object oriented capabilities to the professional application developer. “You can consider these products as replacements of the 4GLs of the ’80s,” said George Schussel, president of Digital Consulting, Inc., in Andover, Mass. “They’re graphically oriented code generators. They run under the dominant graphic standards, which are Windows 3.0, Presentation Manager, Motif and Open Look,” he added.

    The front-end tools are open, so they support multiple independent DBMSs, said Schussel. “They also support execution across the network in a client/server mode.”

    In addition to supporting shared repository and data dictionary environments across that network, Schussel said the front-end tools will be project-oriented for teams. “They will also support the debugging process in a windows environment,” said Schussel. Therefore, a developer could have one window running the program, another running the debugger, and another examining output from that program. He said they will also have an industrial-strength language with which a developer can write detailed procedures. “And of course they have to integrate with and manage the SQL environment,” Schussel added.

      Download PDF

  • Downsizing: A Review of the Enabling Technologies

    Source: American Programmer Published: Aug 1991

    By the summer of 1991, the idea of moving applications from mainframe computers to PC/LANs was starting to take hold. DCI’s Downsizing Expo and Conference played a major role in providing explanations and expertise to companies making this move. This article by George Schussel, appearing in the influential publication American Programmer, discussed IT downsizing technologies. More than most other IT technologies, downsizing had the reality, immediacy, and statistics to be a “silver bullet”. Major benefits, from cost reduction to faster systems development and better-running delivered systems, were realized by using downsizing techniques.

    Client/Server computing was an important element in the downsizing movement, and offered a whole series of important user benefits which the article covered.

    • Developers could use PCs instead of time-share terminals as a primary development platform.
    • Even though the PC was used as the principal computing platform, security, integrity, and recovery capability comparable to minicomputers was the result.
    • The efficiency of queries was optimized through use of the SQL language, greatly reducing the network communication load.
    • Gateway technologies allowed PC users to gain access to data located in mainframe and minicomputer DBMS products such as DB2, IMS, and Rdb.
    • The client/server model isolated the data from the applications program in the design stage. This allowed a greater amount of flexibility in managing and expanding the database and also in adding new programs at the application level.
    • The client/server model was very scalable because, as requirements for more processing come up, more servers could be added in the network, or servers could be traded up to the latest generation of microprocessor.
    • A lot of flexibility came from a computing environment based on SQL, because SQL had been almost universally adopted as a standard. Commitment to an SQL server engine meant that most front-end 4GL, spreadsheet, word processing, and graphics tools would interface cleanly to the SQL engine.
    • Client/server computing provided the robust security, integrity, and database capabilities of minicomputer or mainframe architectures while allowing companies to build and run their applications on PC and minicomputer networks. This hardware/software combination often cut 90 percent of the costs of the hardware/software environment for building “industrial-strength” applications.

    Download the entire article for a comprehensive overview.

    Download PDF

  • George Schussel speaks on database technologies and vendors

    Source: Computing Canada Published: Aug 1991

    “In a display of perverse brilliance, Carl the repairman mistakes a room humidifier for a (PC) but manages to tie it into the network anyhow.” This cartoon caption summed up the tone of George Schussel’s keynote address at the Software World Conference and Exposition, held at the Metro Convention Centre.

    Schussel, a database guru and founder of Andover, Mass.-based Digital Consulting Inc., gave his audience a “quick, irreverent look” at database technologies and vendors, kicking off with a look at relational database management systems (RDBMSs). “The idea, the holy grail, that we can live with one type of database management system … wouldn’t that be great?” asked Schussel, pointing out that this is not a realistic expectation.

    Because of the limitations of SQL (structured query language) — the de facto relational language — relational databases perform poorly in applications such as scientific, OLTP (on-line transaction processing), text-oriented and multimedia. Therefore, he said, we cannot eliminate the need for the old navigational databases.

    Converting to relational from a navigational database is a thankless task, according to Schussel: “If someone assigns you that job, quit or ask to be transferred. It’s just not a good way to go through life.”

    Download PDF

  • DISTRIBUTED DBMS: A WHITE PAPER by GEORGE SCHUSSEL

    Source: white paper Published: Jun 1991

    An in depth discussion of the technologies of and market for distributed database software by George Schussel. Some comments from the paper follow:

    Most of the original research on distributed database technology for relational systems took place at IBM Corporation in IBM’s two principal California software laboratories, Almaden and Santa Teresa. The first widely discussed distributed relational experiment was a project called R-Star, developed within IBM’s laboratories. It is because of IBM’s early use of the word STAR in describing this technology that most vendors’ distributed database system names have incorporated “STAR” in one form or another.

    There are three broad segments to the market:

    1. True distributed DBMS

    2. Distributed access (remote data access)

    3. Client Server

    Distributed access can properly be thought of as a subset of technologies that are being delivered by those vendors selling true distributed DBMS or client server DBMS technologies. The goal of distributed access is to provide gateways for access to data that is not local. The demand, of course, is greatest for the most popular mainframe file and database environments such as IMS, DB2, VSAM, and Rdb. Local DBMS capability is not a requirement for distributed access. Most vendors provide a piece of software known as a requestor to be run on the client side of the RDA environment. Some of the products in this market are not finished gateways but toolkits for users to build their own custom gateways.

    Download PDF

  • Distributed DBMS decisions: Will you go with a client/server DBMS or a true distributed DBMS?

    Source: Computerworld Published: May 1991

    In this 1991 article, George Schussel defined the difference between client/server DBMS and distributed DBMS. Some comments from the article are:

    The difference between true distributed DBMSs and client/server DBMSs is in the concept of location transparency. With location transparency, a program running at any node need not know the physical location of the computer in which the requested data resides. True distributed DBMSs support location transparency, with each separate physical node in the network running a copy of the DBMS and associated data dictionary. It is the true distributed DBMS’ responsibility to determine an access strategy to that data.

    Distributed DBMS software, of which client/server DBMSs and true distributed DBMSs are a part, has to provide all the functionality of multiuser mainframe database software and allow the data in the database to exist on a number of different but physically connected computers.
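
    The location-transparency distinction can be sketched in a few lines; this is not from the article, and the node names, data dictionary, and routing function are illustrative assumptions about how the DBMS layer, rather than the program, decides where a request runs.

      import sqlite3

      # Each "node" is modeled as its own local database; a real distributed DBMS
      # would run a DBMS copy and dictionary on every networked node.
      nodes = {
          "boston": sqlite3.connect(":memory:"),
          "chicago": sqlite3.connect(":memory:"),
      }
      nodes["boston"].execute("CREATE TABLE customers (id INTEGER, name TEXT)")
      nodes["boston"].execute("INSERT INTO customers VALUES (1, 'Acme')")
      nodes["chicago"].execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
      nodes["chicago"].execute("INSERT INTO orders VALUES (100, 1)")

      # Data dictionary: the DBMS, not the application, knows where each table lives.
      data_dictionary = {"customers": "boston", "orders": "chicago"}

      def query(table: str, sql: str):
          node = nodes[data_dictionary[table]]   # access strategy chosen by the DBMS layer
          return node.execute(sql).fetchall()

      # The application names tables only; it never mentions Boston or Chicago.
      print(query("customers", "SELECT * FROM customers"))
      print(query("orders", "SELECT id, customer_id FROM orders"))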

    Distributed DBMSs should have the following functions:

    • Data integrity through automatically locking records and rolling back partially complete transactions.
    • The ability to attack deadlocks, automatically recovering completed transactions in the event of system failure.
    • The ability to optimize data access for a wide variety of application demands.
    • Specialized I/O handling and space management techniques to ensure fast and stable transaction throughput.
    • Full database security and administration utilities.

    Download the entire article for a comprehensive overview.

    Download PDF

  • Letter to Editor predicting triumph of Windows over OS/2

    Source: SAA Age Published: Apr 1991

    In 1991 IBM and Microsoft had started to compete aggressively with each other over the future of the PC. IBM was touting its OS/2, a new and full-function operating system for its latest generation of PCs. Microsoft was pushing Windows 3.0, its add-on to DOS. Windows was easier to install and use, but less capable than OS/2. This competition, with Microsoft winning, defined the computer industry for the remainder of the 1990s. George Schussel, a user of both DOS/Windows and OS/2, had the vision to correctly predict the outcome of this operating system war.

    Some interesting quotes from the open letter by Schussel are:

    • It appears that perhaps you were wearing your Big Blue sunshades when this article was written! The overall tone of your article was that OS/2 and Presentation Manager represent the true religion and that Windows 3.0 is an annoying diversion. I got the distinct impression that you were advising SAA Age readers not to stray from the true path of OS/2 and Presentation Manager.

    • I’d like to take the opportunity in this letter to disagree with what I think are your conclusions and to point out why the vast majority of users are, in fact, choosing Windows over Presentation Manager and will continue to do so in the future.

    • If a finger must be pointed, it should be at IBM for continuing a hard sell of a product the users don’t seem to want. For well over 90% of all PC users, a migration to the DOS Windows 3.0 environment is much more natural than moving to OS/2 and Presentation Manager. After all, given the choice, no one really wants to move from an operating system whose documentation is one inch thick (DOS) to an operating system (OS/2) whose documentation is five feet thick!

    • Since 1987, IBM’s attempts to position OS/2 as the replacement for DOS have been stymied by the fact that the installation, support and hardware requirements for the OS/2-Presentation Manager combination are viewed as overwhelming by DOS platform users. A good example of this, Geoff – the other day I was speaking with the Senior Vice President of a local software firm who said that selling OS/2-based tools was analogous to trying to sell a dead dog as a pet! A little sick humor, but it does illustrate the temper of the user community.

    • So, contrary to the tenor of your article, I believe that Microsoft is not and will not be very interested in future evolutionary developments in OS/2 Version 2 technology. I even believe that OS/2 Version 2, along with Version 1 as I previously stated, is a good candidate for becoming an abandoned operating system. If it doesn’t become abandoned or supplanted by Version 3, there’s a good chance that it will become IBM proprietary over time.

    Download PDF

  • Distributing: How To Take Advantage of the SQL Environment

    Source: 370/390 DATA BASE MANAGEMENT Published: Apr 1991

    George Schussel discusses how the emergence of SQL helped distributed computing become real. Comments from the article follow:

    Distributed DBMS technology provides the highest level of services for supporting distributed processing. Specific advantages from the use of this technology include:

    • As your processing needs grow, you can upgrade the hardware environment incrementally and as needed without throwing away your previous investments.
    • By spreading the processing over many smaller machines, you take advantage of the downsized cost advantage that smaller machines hold over larger ones.
    • The fact that a distributed DBMS offers support for replicated data can contribute mightily to satisfying requirements for high availability and fault tolerance.
    • This same architecture is helpful for hardware maintenance because of its modularity.
    • Distributed DBMS technology offers high performance SQL-based processing because its architecture takes advantage of parallel processing on many computers across a network. As a result, you can use relational processing for online transactions that might otherwise have been impractical on a single mainframe.

    Downsizing and distributing the computing environment are very real and practical approaches to getting more functionality at a lower cost. The availability of SQL has greatly accelerated the movement toward distributed and client/server DBMS environments. This has been principally because SQL has emerged as the standard data base language. The fact that SQL is a relational language and, therefore, supports set processing is also very helpful in a distributed environment. Distributed and client/server SQL DBMSs form the keystone in the migration to the distributed network-based parallel computing architectures of the 1990s.

    Download PDF

  • Clash of the Titans

    Source: Oracle Magazine   Published: Apr 1991

    In this 1991 article George Schussel accurately predicts the preeminence of Microsoft with its Windows operating system. The article forecasts the comparative influence of IBM and Microsoft on the computer business for the 90’s and selects Microsoft as the winner. Some comments from the article:

    To describe Microsoft as a Titan in the same sentence as IBM may seem odd, since Microsoft has sales of $1 billion per year compared to IBM’s $60 billion plus. However, in the PC world Microsoft’s influence is arguably greater than IBM’s.

    The poor market reception for the partners’ OS/2 Presentation Manager operating system has been eclipsed by Microsoft’s success with MS-Windows 3.0. MS-Windows 3.0’s interface is virtually identical to Presentation Manager and should later pick up much of OS/2’s technical capabilities in multitasking and multimedia. Three million copies of MS-Windows 3.0 were sold in 1990 alone. OS/2 sales will not reach that level until 1993. (Note that IBM and Microsoft jointly own OS/2, while Microsoft singly owns MS-Windows 3.0.)

    Download PDF

  • HOST BUSTERS! – A VISIT TO PACIFIC IBM EMPLOYEES FEDERAL CREDIT UNION

    Source: LAN Times Published: Mar 1991

    On his way to deliver the keynote at a downsizing conference, George Schussel took a side trip to visit Pacific IBM Employees Federal Credit Union (PACIBM) in San Jose. This visit to PACIBM, a mid-sized, full-service bank, allowed him to see a very successful example of downsizing a mainframe application to a PC/LAN environment. The project’s goals were very aggressive, and they were met. In particular, the bank successfully downsized its toughest application first – the online transaction processing backbone of banking operations. This article summarizes Schussel’s findings.

    Download PDF

  • Users Suggest Upbeat Future for Downsizing

    Source: Information Week Published: Jan 1991

    This news article is about an early 1991 trend setting conference with 400 attendees held in San Francisco on the topic of downsizing computer systems to the PC/LAN environment. A number of case studies were presented by MIS managers who had successfully moved to the PC/LAN environment. Typically, very large savings along with more satisfied users were the results, according to conference chairman George Schussel.

    Download PDF

  • Distributed DBMS: An Evaluation

    Source: white paper Published: Nov 1990

    This is a white paper prepared by George Schussel for Ashton Tate, the software company which invented the famous dBASE PC tool. While dBASE dominated the “data base” market on PCs in the late ’80s and early ’90s, it was not a technology that could be evolved into a true DBMS engine. The company retained Schussel to help it complete an acquisition of software that was more capable for enterprise applications than dBASE.

    The paper starts off with:

    The market for modern distributed DBMS software started in 1987 with the announcement of INGRES-STAR, a distributed relational system from RTI of Alameda, California. Most of the original research on distributed database technology for relational systems took place at IBM Corporation in IBM’s two principal California software laboratories, Almaden and Santa Teresa. The first widely discussed distributed relational experiment was a project called R-Star, developed within IBM’s laboratories. It is because of IBM’s early use of the word STAR in describing this technology that most vendors’ distributed database system names have incorporated “STAR” in one form or another.

    Today, the market for distributed DBMS is almost entirely based on the SQL language and extensions. (Principal exceptions being Computer Associates with its distributed DATACOM, and Fox Software with its newly announced Fox Server.)

    There are three broad segments to the market:

    1. True distributed DBMS

    2. Distributed access (remote data access)

    3. Client Server

    Download PDF

  • The Promise and the Reality of AD/Cycle

    Source: Datamation Published: Nov 1990

    In the 1989 and 1990 timeframe, analyst George Schussel frequently met with IBM developers to track and understand the promise of the new computer aided software engineering (CASE) technologies that IBM was developing. The umbrella term that IBM used for its CASE approach was AD/Cycle. Although it was an interesting standards approach, Schussel was not optimistic about its chances for success because of the complexity it represented. Ultimately AD/Cycle collapsed because of the new distributed client/server approaches that were more practical and cost effective.

    The first two paragraphs of this landmark article stated:

    IBM’s AD/Cycle applications development platform is slowly emerging. Whether IBM’s dream of using multiple vendors and tools will revolutionize software development or collapse under its own weight remains to be seen.

    Like any new software project, AD/Cycle is fraught with risk. IBM’s integrated approach to computer aided software engineering (CASE) promises a dramatic improvement in productivity across the application development life cycle, but much of the technology is unproven and untested. The amount of up-front investment required by users is unknown, but large. Companies may invest millions of dollars in staff, software and hardware, only to find no significant improvement over more conventional development approaches using tools such as relational database managers and fourth-generation languages. But competitors may adopt AD/Cycle and achieve significant success, thereby gaining a competitive business advantage.
    Download PDF

  • SAA REVOLUTION NEEDS TIME FOR EVOLUTION

    Source: SOFTWARE MAGAZINE Published: Oct 1990

    In the early ’90s, IBM made a major effort to create Systems Application Architecture (SAA) as an industry standard for software development. Ultimately, SAA was a failure due to the expense of the underlying software and hardware platforms and the emergence of PC-centric client/server computing as a dominant application architecture in the ’90s. In this early article, George Schussel offers some intelligent commentary on potential pitfalls of the SAA approach.

    A few comments from that article follow:

    The jury is still out on IBM’s plan for providing application standards across platforms. But before the industry can fairly judge the SAA blueprint, it must have time to evolve. Current drawbacks, such as expense and lack of products, may not be an issue over time. With IBM’s track record as an industry leader, organizations committed to Big Blue would be well advised to prepare the way for SAA migration.

    IBM’s thinking about distributed applications has centered on data systems architecture and distributed SQL DBMS. While the concepts of distributed SQL are elegant and sophisticated, the delivery of these capabilities is now planned over the next five years. Client/server SQL solutions are simpler and might be deliverable earlier.

    IBM’s workstation standards are built around OS/2 EE. This environment requires a large 80386 PS/2, which costs around $10,000. Since few existing PCs have enough power to run OS/2 EE, this means substantial added outlays for most potential SAA users. DOS/Windows 3.0 has a GUI that is almost identical to OS/2’s Presentation Manager. Support for Windows 3.0 on clients would not require a migration away from DOS and would require less investment in hardware.

    Download PDF

  • George Schussel explains what IBM is after with its SAA strategy

    Source: Computerworld (Australia) Published: Oct 1990

    George Schussel was the Chairperson of the Database World Conference and Exposition held annually in Australia. Computerworld published this interview with Schussel which covers IBM’s then current SAA strategy. Some comments from the article follow:

    WHAT IS SAA? It is IBM’s plan for providing application standards across three diverse platforms – PS/2, AS/400, System 370. Its goal is to provide a set of standards and products that allow a single application to run identically on any of these environments. In addition, SAA products will allow a high level of connectivity and cooperative processing capability for applications running across these platforms.

    Is there anything new here? Many of the components of SAA are rather old stuff – products like Cobol, C, DB2, CICS, REXX, etc. On the other hand, there are some important and vital new concepts in SAA:
    Download PDF

  • Downsizing With Client/server Computing

    Source: Oracle Magazine Published: Jul 1990

    An early champion of client/server computing, George Schussel explains how this technology can improve benefits and lower costs at the same time. Excerpts follow:

    One of the best ways to downsize is by using the new generation of SQL-based client-server computing technologies from vendors such as Oracle, Sybase, Gupta and Novell. In the client-server model, the application is split between functions that execute on the client, a PC or workstation, and functions that run on the server, a multiuser data repository. Most application logic runs at the client desktop machine. When the application requires data, it generates the necessary SQL command and then passes high-level code to the communications facility. This facility then directs the SQL commands to the server, where the database request is executed.

    The idea of managing data on a separate machine fits well with the management approach of treating data as a corporate resource. In addition to executing the SQL statement, the server handles security and provides for concurrent access to the data by many queries.

    A benefit of using SQL client-server computing is that the hardware and software products supporting this approach are new and take advantage of the latest developments, such as application languages in a windowing environment. Another benefit is network efficiency. In traditional file-serving PC LAN approaches, the entire data file must be transmitted across a network to the client machine. With SQL as the basis for database management, this problem is resolved, since only the necessary query response data (a table) is transmitted to the client machine. SQL on the server also enables the implementation of advanced facilities, such as triggers and automatic procedures in the database.
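
    To make the split concrete for today’s readers, here is a minimal, hypothetical Python sketch of the flow described above: the client holds the application logic and ships only SQL to a shared engine, which returns just the result table and can also enforce triggers. Python’s built-in sqlite3 module stands in for a 1990-era SQL server, and the table, query, and trigger are invented for illustration.

        import sqlite3

        # The "server": a shared SQL engine that owns the data, enforces rules,
        # and arbitrates concurrent access (sqlite3 is only a stand-in here).
        server = sqlite3.connect(":memory:")
        server.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
        server.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                           [("East", 120.00), ("West", 75.50), ("East", 300.25)])
        # Triggers and automatic procedures live in the server, not in client code.
        server.execute("""
            CREATE TRIGGER no_negative BEFORE INSERT ON orders
            WHEN NEW.amount < 0
            BEGIN SELECT RAISE(ABORT, 'negative amount'); END
        """)
        server.commit()

        # The "client": application logic runs here and sends only a SQL statement
        # across the connection, not the whole data file.
        def regional_totals(conn):
            cursor = conn.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
            # Only the small result table travels back to the client.
            return cursor.fetchall()

        print(regional_totals(server))   # e.g. [('East', 420.25), ('West', 75.5)]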

    Download PDF

  • Development heading for a client-server future

    Source: Computing Canada   Published: Apr 1990

    Two software futurists put their views on the line in a debate over industry issues at Database ’90, held here earlier this month. If attendees expected a battle, they were disappointed, as George Schussel, president of Andover, Mass.-based Digital Consulting Inc., and Jeff Tash, president and founder of Database Decisions in Newton Centre, Mass., generally agreed on most issues.

    The role of the mainframe will change in 1990s computing as more processing power is available on workstations, both predicted. “The word mainframe becomes obsolete,” said Schussel. “It really becomes a server.”

    Tash described two main computing environments he sees unfolding in the ’90s. The first is based on IBM’s Systems Application Architecture (SAA) and involves PS/2s hooked up to mainframes. The other approach will be Unix and its open systems concept, which will probably consist of X-terminal workstations connected to RISC-based (reduced instruction set computing) Unix hosts, he said. The real difference, said Tash, is that with IBM’s strategy, the processing power will be on the desktop, while in the Unix world, it will reside on the back end.

    Both Schussel and Tash pointed to the Unix solution as the less expensive way to go in terms of cost per MIPS. But they indicated the real issue is software. “All hardware goes into museums; all software goes into production every night,” commented Tash, adding that if SAA has an advantage over Unix, it’s in the “huge installed base” of software already out there.

    Schussel said most companies will continue to run their mainframe software, even if they plan on moving to a Unix environment. “That’s your challenge,” he told his audience. “How do you maintain compatibility with the past yet develop with new technologies?”

    Download PDF

  • Interview with George Schussel

    Source: Data Base Newsletter Published: Mar 1990

    In the 1990 timeframe, the information systems industry was undergoing traumatic change as new technologies in the areas of distributed processing, database, and CASE (Computer Aided Software Engineering) were changing the way applications could be built. Cooperative processing and client/server architectures presented new opportunities and challenges for system development and information planning. The CASE market was experiencing a major shift as a result of IBM’s announcement of AD/Cycle, a framework for integrated CASE technology. In the following interview, George Schussel, President of Digital Consulting, Inc., discussed the relationship between cooperative processing and client/server architecture, the impact client/server architecture was having on the distributed database market, and changes in the CASE market resulting from the AD/Cycle announcement.
    Download PDF

  • The IBM Effect

    Source: DATA BASED ADVISOR Published: Mar 1990

    George Schussel talks about the important impact that IBM has on computing standards in this in-depth cover story. The article points out how standards set by IBM have an influence far beyond IBM’s own customers. The most important standards, it argues, are those in software, since IBM is presented as a company more dominant in software than in hardware.

    A few comments from that article follow:

    IBM pays a lot of attention to software because: 1) software drives the sale of hardware, and 2) software has a higher growth rate than hardware. IBM’s success in the software arena has a powerful impact on all users of PCs, whether or not the brand name on the console is IBM.

    For both PC and mainframe users, one of the most important emerging standards is the SQL database language. Although a number of different programming languages for implementing the relational model were developed in the 1970s and early 1980s, the IBM implementation, SQL, became the standard. This happened even though most relational database management system (DBMS) experts severely criticized SQL’s technical shortcomings in presentations and papers through much of the 1980s. SQL was adopted, as you’ll see, for two reasons: political expediency, and the recognition that any other DBMS language wouldn’t be supported by IBM and thus couldn’t become a widely used standard.

    Download PDF

  • Plenty in store for the IBM repository

    Source: Computing Canada Published: Feb 1990

    When IBM Corp. announced its intention to enter the computer-aided software engineering (CASE) market last September, the move was based on a forthcoming set of products and an environment called Application Development/Cycle (AD/Cycle). Speculation about what this environment will involve, including a concept called Repository Manager, has kept the industry buzzing.

    “The big problem we have today is that IBM hasn’t dropped the other shoe yet — they told us what the framework’s going to look like, but they haven’t told us how it’s actually going to work,” said Ken Orr, a principal in the Ken Orr Institute. “And in fact, until they give us the data models we’re not going to know.” Speaking at The Repository Conference sponsored here last month by Digital Consulting Inc. of Andover, Mass., Orr described IBM’s software strategy as “a new generation of software,” a major component of which will be something called a repository. “There are a variety of bases on which (the repository) is a very important component in the software technology of the future,” he commented. “There are plans long-term for the repository to go into the execution and operation of data processing.”

    Speaker George Schussel, DCI president, described a repository as a “database of meta-data (data about data) — a database for all the things that are necessary to build programs in an application generation platform.” According to Schussel, a repository will offer the same capabilities as current databases — reusability, concurrency, multiple availability, management — but instead of applying them to an execution environment, the repository will operate in the development environment. It will contain things like object definitions, conceptual data models, process logic, screens and reports, and source code.
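
    To give a rough, modern picture of what such a “database of meta-data” might contain, the hypothetical Python sketch below models a few repository entry types and a simple lookup; the class names and fields are invented for illustration and are not taken from IBM’s Repository Manager.

        from dataclasses import dataclass, field

        # Hypothetical repository entries: development-time artifacts (definitions,
        # models, process logic, screens, source code) rather than production data.
        @dataclass
        class Entity:                          # element of a conceptual data model
            name: str
            attributes: dict                   # attribute name -> declared type

        @dataclass
        class Screen:                          # a screen or report definition
            name: str
            fields: list

        @dataclass
        class Program:                         # process logic / source code reference
            name: str
            source: str
            uses_entities: list = field(default_factory=list)

        # The repository behaves like a database of these definitions, giving every
        # tool in the development environment one shared place to look them up.
        repository = {}

        def register(item):
            repository[item.name] = item

        register(Entity("Customer", {"id": "INTEGER", "name": "CHAR(40)"}))
        register(Screen("CustomerInquiry", ["id", "name"]))
        register(Program("CUSTUPD", "custupd.cbl", uses_entities=["Customer"]))

        # Reuse: any tool can retrieve the same definition instead of redefining it.
        print(repository["Customer"].attributes["name"])   # CHAR(40)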

    Download PDF

  • IBM’s strategy, a four-pronged approach

    Source: Computing Canada Published: Sep 1989

    While IBM wants a strong, independent software product community, it would prefer that that community not offer alternative database management system (DBMS) products. IBM would like to be your (only) supplier for DBMS software.

    This goal is already a fact of life on the AS/400. Although on the PS/2 platform the data manager (OS/2 Extended Edition) is separately priced, IBM’s marketing goal is pretty clear from the name.

    In the 370 line, IBM has been careful not to talk about a bundling of DB2. However, it is likely that, as the next-generation hardware series is delivered and IBM’s architecture develops over the 1990s, DB2 will evolve into a common subsystem with MVS, ultimately to be installed by most customers using large IBM mainframes.

    This does not necessarily mean that the market for alternative database managers on large mainframes will go away. In fact, over the 1990s the DBMS choice for most companies will become more tactical than strategic.

    The emergence of SQL as a standard database access language for all DBMS vendors will allow more portability of applications across different DBMSs. Most large shops, then, are likely to have several DBMS products in the 1990s, with DB2 being one of them.

    by George Schussel
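
    As a hypothetical illustration of the portability point above, the short Python sketch below writes the application against a single standard SQL statement and a generic connection object; sqlite3 is used only as a convenient demonstration engine, and the table and query are invented. Any driver exposing the same cursor/execute/fetch interface could in principle be substituted, making the choice of DBMS behind it tactical rather than strategic (though parameter placeholder styles do vary by driver).

        import sqlite3

        # The application depends on standard SQL, not on any one vendor's DBMS.
        PORTABLE_SQL = "SELECT name, balance FROM accounts WHERE balance > ?"

        def high_balance_accounts(conn, threshold):
            # Works unchanged against any engine whose driver offers the same
            # cursor/execute/fetch calls and accepts this standard SQL.
            cur = conn.cursor()
            cur.execute(PORTABLE_SQL, (threshold,))
            return cur.fetchall()

        # Demonstration backend; a different engine's driver could be swapped in
        # here without touching high_balance_accounts.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
        conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                         [("Acme", 1200.00), ("Globex", 50.00)])
        print(high_balance_accounts(conn, 100.0))   # [('Acme', 1200.0)]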

    Download PDF

  • WHY SOFTWARE IS IBM’s MOST IMPORTANT BUSINESS

    Source: SOFTWARE MAGAZINE Published: Jul 1989

    Analyst George Schussel actively followed IBM’s businesses for 30+ years. In this article he accurately portrays the importance of software to the technology giant. Some comments from the article follow:

    With 1988 software and services sales of approximately $20 billion, IBM is by far the largest software company in the world. IBM’s dominance in software is actually greater than in hardware.

    Large, competing independent software vendors like Cullinet Software, Inc., Westwood, Mass., and Oracle Corp., Belmont, Calif., have sales of a few hundred million dollars per annum.

    Computer Associates International, Inc., Garden City, N.Y., the largest independent, has an annual revenue of about $1 billion.

    IBM needs to do well in software because software sales drive the sales of hardware, and the profit margins and growth prospects for software are greater than those of the hardware business. Based on current projections, IBM’s attention to its software businesses is only likely to increase (see Table 1).
    Download PDF

  • FROM COG TO KINGPIN

    Source: CIO Magazine Published: Mar 1989

    CIO Magazine ran a story on workers in information technology who moved from desk jobs into entrepreneurship. George Schussel’s path from employee to founder of DCI was among the cases covered.

    Download PDF

  • A Perspective on the Application of Fourth Generation Languages in Computer Software Development

    Source: Perspective   Published: Feb 1988

    The NCR magazine Perspective published George Schussel’s thoughts on a new generation of application development languages in 1988. Some comments from that article follow:

    At present, 4GLs can be divided into five different categories:

    Query and reporting packages – These are typically used in the data processing department; they are not suitable for updating files but are very good for retrieving information from existing data bases. An example is CCA’s Imagine.

    Programming-Oriented Languages – These are designed for someone who knows how to program and they serve as a replacement for COBOL. They are designed for data processing professionals and are used for building transaction and logic-intensive applications. Examples are Cognos’ PowerHouse or Cortex Corporation’s Application Factory.

    Information Center 4GLs – These are human-like languages designed for non-data processing people. They are very effective for building personal-use systems and smaller applications, but they have serious efficiency problems if used for large transaction-based systems. An example is Information Builders’ FOCUS.

    COBOL System Generators – These generate COBOL but do not generate direct object code that can be executed on a computer. They have certain advantages because their COBOL code can be compiled and moved to any object computer. Examples are CGI’s PACBASE or Pansophic’s TELON.

    Decision support systems – These range from simple spreadsheets to complex, multi-dimensional, array-based data base management systems that have simulation languages built into them and allow modeling and “what if” simulations. An example is Execucom’s IFPS.

    All five are fourth generation languages, but they differ in the functions they are used to perform.

    Download PDF

  • Application Development in the 5th Generation

    Source: Datamation   Published: Nov 1987

    In this landmark 1987 article, George Schussel forecast the future of software technologies over the coming decade of the 1990s. The article accurately anticipated the emergence of technologies such as 3-tiered computing architectures, computer-aided software engineering (CASE), repositories, distributed databases, and SQL standards, among other important software futures. Schussel discussed Teradata (now NCR/Teradata), Oracle, IBM, Sybase, and many other software pioneers.

    Download PDF

  • The Market for DBMS Software

    Source: Banking Software Review Published: Sep 1987

    In the late 1980s, the market for DBMS software was accelerating as businesses discovered that it was the key enabling technology for on-line and integrated systems. George Schussel discusses some of the different products competing in the marketplace for banking software. Some comments from the article follow:

    Integrated Development Software

    This is “major league” software – the guts of the data processing (DP) department of the ’80s. It usually consists of a data base management system (DBMS), an integrated, active data dictionary, a query language, a transaction processing monitor (often CICS), a report writer, a micro/mainframe link, and interfaces to various other packages, including applications. A key portion of the integrated software package is the data dictionary/directory, which controls data definitions and is therefore central to coupling the different software pieces.

    Download PDF

  • Random Thoughts from Dr. George Schussel

    Source: Canadian Datasystems Published: Sep 1987

    A broad ranging discussion with George Schussel on the general state of computing in the late 1980’s was published in Canada through this magazine. One interesting quote from the article follows:

    Q: You predict that by 1992, most of the computing in the United States will occur on small computers, leaving the mainframes to handle data storage. What does that mean to the centralized MIS function?

    A: It means that MIS must be totally different than it was 10 years ago. We’re talking revolution, not evolution. The management of DP, in order to manage the transition from a third generation to a fourth generation environment, has to redefine its job. In some cases, as we have seen, it means that people will get rid of a 4300 and replace it with eight PC ATs and lay off five programmer/analysts. That may be an extreme but not necessarily an unreasonable example of how to cope. The old way was a centralized DP function doing all the work. The new way is the centralized DP function creating and managing databases and, like a utility, providing access to those databases. The new way is the centralized DP department providing standards, advice and supervision.

    Download PDF

  • Shopping for a Fourth Generation Language

    Source: Datamation   Published: Nov 1986

    Application developers are always searching for better, higher-level software that will improve the development process. It has always been that way in the computer business, and we don’t see anything on the horizon to change this principle of computing. After assembler languages came compiler languages like COBOL and FORTRAN, and after these a whole host of languages that came to be known as 4GLs appeared. This early article by George Schussel was an evaluation of, and guide to, the field of 4GLs.

    The article starts off by stating:

    Fourth generation languages are like a good set of tools: when you need them, they come in handy. Today’s 4GL software provides significantly different solutions for improving productivity compared with third generation COBOL and FORTRAN. The semantic and syntactical differences among 4GLs are also far greater than the differences between FORTRAN and COBOL. And although 4GLs are similar in some ways (most interface to one or more database management systems, provide query facilities and report writers, and support screen painting and some kind of procedural language), proprietary versions vary greatly in the types of applications and environments they are best suited for.

    Download PDF

  • A Taxonomy for Productivity Software

    Source: Data Base Newsletter   Published: Sep 1986

    In this 1986 article, George Schussel lays out a categorization scheme for application development software and describes the differences between the various types of development tools. The software types covered include integrated development workbenches, COBOL system generators, 4GLs, information center software, and decision support software. Most of the popular mid-’80s tools are discussed, including Cincom’s Supra, Cullinet’s IDMS/R, Software AG’s ADABAS/NATURAL, Information Builders’ FOCUS, and many more.

    Download PDF

  • DB&4GL: 5 QUESTIONS & ANSWERS

    Source: Canadian Datasystems Published: Nov 1985

    Early in the era of database management technology, analyst George Schussel answers questions about the direction of the database field and application development technology. The questions posed were:

    1. Using a modern relational DBMS and 4GL is supposed to improve applications development productivity. For most companies, will this have an impact on decisions to purchase application software?
    2. Some vendors recommend one DBMS/4GL for operational systems and a different DBMS/4GL for information centre/DSS applications, while other vendors say that a single system is the best approach. Your opinion?
    3. Some vendors have described their DBMS as relational when it is hierarchical or network with some limited tabular extensions. Do these “Born Again” systems deserve to be called relational?
    4. Are the differences between inverted list DBMSs such as System 1032, Model 204, and ADABAS and truly relational DBMSs such as Oracle, INGRES, and DB2 important?
    5. Within the next five years, will artificial intelligence find its way into 4th generation languages?

    Download PDF

  • Getting Connected with DCI’s George Schussel

    Source: Breaktime Published: Dec 1969

    The local technology newspaper Breaktime interviews Schussel on subjects of personal interest.

    Download PDF