Virtual Reality and Computer Simulation


Introduction

 

Virtual reality and computer simulation have not received much attention from ethicists. It is argued in this essay that this relative neglect is unjustified, and that there are important ethical questions that can be raised in relation to these technologies. First of all, these technologies raise important ethical questions about the way in which they represent reality and the misrepresentations, biased representations and offensive representations that they may contain. In addition, actions in virtual environments can be harmful to others, and can also be morally problematic from the point of view of deontological and virtue ethics. Although immersive virtual reality systems are not yet used on a large scale, nonimmersive virtual reality is regularly experienced by hundreds of millions of users, in the form of computer games and virtual environments for exploration and social networking. These forms of virtual reality also raise ethical questions regarding their benefits and harms to users and society, and the values and biases contained in them.

This paper has the following structure. The next section will describe what virtual reality and computer simulations are and what the current applications of these technologies are. This is followed by a section that analyzes the relation between virtuality and reality and asks whether virtuality can and should function as a substitute for reality. Three subsequent sections discuss ethical aspects of representation in virtual reality and computer simulations, the ethics of behavior in virtual reality, and the ethics of computer games. A concluding section discusses issues of professional ethics in the development and professional use of virtual reality systems and computer simulations.

Background: The Technology and Its Applications

Virtual reality

Virtual reality (VR) technology emerged in the 1980s, with the development and marketing of systems consisting of a head-mounted display (HMD) and datasuit or dataglove attached to a computer. These technologies simulated three-dimensional (3-D) environments displayed in surround stereoscopic vision on the head-mounted display. The user could navigate and interact with simulated environments through the datasuit and dataglove, items that tracked the positions and motions of body parts and allowed the computer to modify its output depending on the recorded positions. This original technology has helped define what is often meant by "virtual reality": an immersive, interactive three-dimensional computer-generated environment in which interaction takes place over multiple sensory channels and includes tactile and positioning feedback.

According to Sherman and Craig (2003), there are four essential elements in virtual reality: a virtual world, immersion, sensory feedback, and interactivity. A virtual world is a description of a collection of objects in a space, together with rules and relationships governing these objects. In virtual reality systems, such virtual worlds are generated by a computer. Immersion is the sensation of being present in an environment, rather than just observing an environment from the outside. Sensory feedback is the selective provision of sensory data about the environment based on user input: the actions and position of the user provide a perspective on reality and determine what sensory feedback is given. Interactivity, finally, is the responsiveness of the virtual world to user actions.
Interactivity includes the ability to navigate virtual worlds and to interact with objects, characters and places. These four elements can be realized to a greater or lesser degree with a computer, and that is why there are both broad and narrow definitions of virtual reality. A narrow definition would only count fully immersive and fully interactive virtual environments as VR. However, there are many virtual environments that do not meet all these criteria to the fullest extent possible, but that could still be categorized as VR. Computer games played on a desktop with a keyboard and mouse, like Doom and Half-Life, are not fully immersive, and sensory feedback and interactivity in them are more limited than in immersive VR systems that include a head-mounted display and datasuit. Yet they do present virtual worlds that are immersive to an extent, and that are interactive and involve visual and auditory feedback. In Brey (1999) I therefore proposed a broader definition of virtual reality, as a three-dimensional interactive computer-generated environment that incorporates a first-person perspective. This definition includes both immersive and nonimmersive (screen-based) forms of VR.

The notion of a virtual world, or virtual environment, as defined by Sherman and Craig, is broader than that of virtual reality. A virtual world can be realized by means of sensory feedback, in which case it yields virtual reality, but it can also be realized without it. Classical text-based adventure games like Zork, for example, play out in interactive virtual worlds, but users are informed about the state of this world through text. They provide textual inputs, and the game responds with textual information rather than sensory feedback about changes in the world. A virtual world is hence an interactive computer-generated environment, and virtual reality is a special type of virtual world that involves location- and movement-relative sensory feedback.

Next to the term "virtual reality," there is the term "virtuality" and its derivative adjective "virtual". This term has a much broader meaning than the term "virtual reality" or even "virtual environment". As explained more extensively in the next section, the term "virtual" refers to anything that is created or carried by a computer and that mimics a "real", physically localized entity, as in "virtual memory" and "virtual organization". In this essay, the focus will be on virtual reality and virtual environments, but occasionally, especially in the next section, the broader phenomenon of virtuality will be discussed as well.

Returning to the topic of virtual reality, a distinction can be made between single-user and multi-user or networked VR. In single-user VR, there is only one user, whereas in networked VR, there are multiple users who share a virtual environment and appear to each other as avatars, which are graphical representations of the characters played by users in VR. A special type of VR is augmented reality, in which aspects of simulated virtual worlds are blended with the real world that is experienced through normal vision or a video link, usually through transparent glasses on which computer graphics or data are overlaid. Related to VR, furthermore, are telepresence and teleoperator systems, which extend a person's sensing and manipulation capability to a remote location by displaying images and transmitting sounds from a real environment that can (optionally) be acted on from a distance through remote handling systems such as robotic arms.
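To make the interplay of Sherman and Craig's four elements more concrete, the following minimal sketch shows how a virtual world, position-dependent sensory feedback and interactivity might fit together in a single update loop. It is a purely illustrative, hypothetical example: the class and function names and the toy tracking routine are invented for this sketch and do not correspond to any particular VR toolkit.

```python
# Hypothetical sketch only: class and function names are invented for this
# example and do not come from any particular VR toolkit.
from dataclasses import dataclass, field


@dataclass
class VirtualWorld:
    # The virtual world: a collection of objects plus rules governing them.
    objects: dict = field(default_factory=lambda: {"tree": (2.0, 0.0, 5.0)})

    def update(self, user_action: str) -> None:
        # Interactivity: the world responds to user actions.
        if user_action == "move_tree":
            x, y, z = self.objects["tree"]
            self.objects["tree"] = (x + 1.0, y, z)


def read_head_pose(step: int) -> tuple:
    # Stand-in for a head-mounted display or datasuit tracker reading.
    return (0.1 * step, 1.7, 0.0)


def render_from(pose: tuple, world: VirtualWorld) -> str:
    # Sensory feedback: the output depends on the user's tracked position,
    # so the view changes as the user moves (a first-person perspective).
    return f"view from {pose}: {world.objects}"


def main_loop(frames: int = 3) -> None:
    world = VirtualWorld()
    for step in range(frames):
        pose = read_head_pose(step)       # track the user's position
        action = "move_tree" if step == 1 else "none"
        world.update(action)              # interactivity
        print(render_from(pose, world))   # position-dependent feedback


if __name__ == "__main__":
    main_loop()
```

In an immersive system, the tracker reading would come from a head-mounted display or datasuit, and the feedback step would drive stereoscopic displays and audio rather than printed text.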
Computer simulation

A computer simulation is a computer program that contains a model of a particular system (either actual or theoretical) and that can be executed, after which the execution output can be analyzed. Computer simulation is also the name of the discipline in which such models are designed, executed and analyzed. The models in computer simulations are usually abstract and either are or involve mathematical models. Computer simulation has become a useful part of the mathematical modeling of many natural systems in the natural sciences, human systems in the social sciences, and technological systems in the engineering sciences, in order to gain insight into the operations of these systems and to study the effects of alternative conditions and courses of action.

It is not usually an aim in computer simulations, as it is in virtual reality, to do realistic visual modeling of the systems that they simulate. Some of these systems are abstract, and even for those systems that are concrete, the choice is often made not to design graphical representations of the system, but to rely solely on abstract models of it. When graphical representations of concrete systems are used, they usually represent only those features that are relevant to the aims of the simulation, and do not aspire to the realism and detail sought in virtual reality. Another difference with virtual reality is that computer simulations need not be interactive. Usually, those running a simulation will set a number of parameters at the beginning and then "run" the simulation without any intervention. In this standard case, the person running the simulation is not defined as part of the simulation, as would happen in virtual reality. An exception is the interactive simulation, also referred to as a human-in-the-loop simulation, in which the simulation includes a human operator; an example is a flight simulator. If a computer simulation is interactive and makes use of three-dimensional graphics and sensory feedback, it also qualifies as a form of virtual reality. Sometimes the term "computer simulation" is also used to include any computer program that models a system or environment, even if it is not used to gain insight into the operation of a system. In that broad sense, virtual environments, at least those that aim to do realistic modeling, would also qualify as computer simulations.

Applications

VR is used to simulate both real and imaginary environments. Traditional VR applications are found in medicine, education, arts and entertainment, and the military (Burdea and Coiffet, 2003). In medicine, VR is used for the simulation of anatomical structures and medical procedures in education and training, for example for performing virtual surgery. Increasingly, VR is also being used for (psycho)therapy, for instance for overcoming anxiety disorders by confronting patients with virtual anxiety-provoking situations (Wiederhold and Wiederhold, 2004). In education, VR is used in exploration-based learning and learning by building virtual worlds. In the arts, VR is used to create new art forms and to make the experience of existing art more dynamic and immersive. In entertainment, mostly nonimmersive, screen-based forms of VR are used in computer and video games and arcades. This is a form of VR that many people experience on a regular basis. In the military, finally, VR is used in a variety of training contexts for the army, navy and air force.
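Before turning to emerging applications, the standard, non-interactive kind of run described in the computer simulation subsection above can be illustrated with a minimal sketch: parameters are fixed at the start, the model is run without intervention, and the output is analyzed afterwards. This is a purely illustrative, hypothetical example; the simple cooling model and all names are invented for this sketch and are not drawn from any application mentioned in this section.

```python
# Hypothetical sketch only: the cooling model and all names are invented for
# this example and are not tied to any application discussed in the text.

def run_cooling_simulation(t_initial=90.0, t_ambient=20.0, k=0.1,
                           dt=1.0, steps=60):
    """Newtonian cooling integrated with a simple forward-Euler update."""
    temperatures = [t_initial]
    t = t_initial
    for _ in range(steps):
        t += -k * (t - t_ambient) * dt    # the model rule, applied each step
        temperatures.append(t)
    return temperatures


if __name__ == "__main__":
    # Parameters are fixed up front, the run proceeds without intervention,
    # and the output is analyzed afterwards.
    trace = run_cooling_simulation()
    print(f"temperature after {len(trace) - 1} steps: {trace[-1]:.2f}")

    # Studying alternative conditions means re-running with other parameters.
    slower = run_cooling_simulation(k=0.05)
    print(f"with slower cooling: {slower[-1]:.2f}")
```

Studying alternative conditions amounts to re-running the program with different parameters; in a human-in-the-loop simulation such as a flight simulator, by contrast, an operator feeds new inputs into the loop while it runs.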
Emerging applications of VR are found in manufacturing, architecture, and training in a variety of (dangerous) civilian professions. Computer simulations are used in the natural and social sciences to gain insight into the functioning of natural and social systems, and in the engineering sciences for performance optimization, safety engineering, training and education. They are used on a large scale in the natural and engineering sciences, where fields such as computational physics, computational neuroscience, computational fluid mechanics, computational meteorology and artificial life have sprung up. They are also used on a somewhat more modest scale in the social sciences, for example in the computational modeling of cognitive processes in psychology, in the computational modeling of artificial societies and social processes, in computational economic modeling, and in strategic management and organizational studies. Computer simulations are increasingly used in education and training, to familiarize students with the workings of systems and to teach them to interact successfully with such systems.

Virtuality and Reality

The Distinction between the Virtual and the Real

In the computer era, the term "virtual" is often contrasted with "real". Virtual things, it is often believed, are things that only have a simulated existence on a computer and are therefore not real, like physical things. Take, for example, rocks and trees in a virtual reality environment. They may look like real rocks and trees, but we know that they have no mass, no weight, and no identifiable location in the physical world, and are just illusions generated through electrical processes in microprocessors and the resulting projection of images on a computer screen. "Virtual" hence means "imaginary", "make-believe", "fake", and contrasts with "real", "actual" and "physical". A virtual reality is therefore always only a make-believe reality, and can as such be used for entertainment or training, but it would be a big mistake, in this view, to call anything in virtual reality real, and to start treating it as such.

This popular conception of the contrast between virtuality and reality can, however, be demonstrated to be incorrect. "Virtual" is not the perfect opposite of "real", and some things can be virtual and real at the same time. To see how this is so, let us start by considering the semantics of "virtual". The word "virtual" has two traditional, pre-computer meanings. On the first, most salient meaning, it refers to things that have certain qualities in essence or in effect, but not in name. For instance, if only a few buffalo are left, one can say that buffalo are virtually extinct, extinct for all practical purposes, even though they are not formally or actually extinct. Virtual can also mean imaginary, and therefore not real, as in optics, where reference is made to virtual foci and images. Notice that only on the second, less salient meaning does "virtual" contrast with "real". On the more salient meaning, it does not mean "unreal" but rather "practically but not formally real".

In the computer era, the word "virtual" came to refer to things simulated by a computer, like virtual memory, which is memory that is not actually built into a processor but nevertheless functions as such. Later, the scope of the term "virtual" expanded to include anything that is created or carried by a computer and that mimics a "real" equivalent, like a virtual library and a virtual group meeting.
The computer-based meaning of "virtual" conforms more with the traditional meaning of "virtual" as "practically but not formally real" than with "unreal". Virtual memory, for example, is not unreal memory, but rather a simulation of physical memory that can effectively function as real memory. Under the above definition of "virtual" as "created or carried by a computer and mimicking a 'real' equivalent", virtual things and processes are imitations of real things, but this need not preclude them from being real themselves. A virtual game of chess, for example, is also a real game of chess. It is just not played with a physically realized board and pieces.

I have argued (Brey, 2003) that a distinction can be made between two types of virtual entities: simulations and ontological reproductions. Simulations are virtual versions of real-world entities that have a perceptual or functional similarity to them, but that do not have the pragmatic worth or effects of the corresponding real-world equivalent. Ontological reproductions are computer imitations of real-world entities that have (nearly) the same value or pragmatic effects as their real-world counterparts. They hence have a real-world significance that extends beyond the domain of the virtual environment and that is roughly equal to that of their physical counterpart. To appreciate this contrast, consider the difference between a virtual chess game and a virtual beer. A virtual beer is necessarily a mere simulation of a real beer: it may look much like a real one, and may be lifted and consumed in a virtual sense, but it does not provide the taste and nourishment of a real beer and will never get one drunk. A virtual chess game, in contrast, may lack the physical sensation of moving real chess pieces on a board, but this sensation is considered peripheral to the game, and in relevant other respects, playing virtual chess is equivalent to playing chess with physical pieces. This is not to say that the distinction between simulations and ontological reproductions is unproblematic; it is ultimately a pragmatic distinction, and a virtual entity will be classified as one or the other depending on whether it is judged to share enough of the essential features of its physical counterpart.

In Brey (2003), I argued that two classes of physical objects and processes can be ontologically reproduced on computers. A first class consists of physical entities that are defined in terms of visual, auditory or computational properties that can be fully realized on multimedia computers. Such entities include images, movies, musical pieces, stereo systems and calculators, which are all such that a powerful computer can successfully reproduce their essential physical or formal properties. A second class consists of what John Searle (1995) has called institutional entities, which are entities that are defined by a status or function that has been assigned to them within a social institution or practice. Examples of institutional entities are activities like buying, selling, voting, owning, chatting, playing chess, trespassing and joining a club, and requisite objects like contracts, money, letters and chess pieces. Most institutional entities are not dependent on a physical medium, because they are only dependent on the collective assignment of a status or function. For instance, we call certain pieces of paper money not because of their inherent physical nature but because we collectively assign monetary value to them.
But we could also decide, and have decided, to assign the same status to certain sequences of bits that float around on the Internet. In general, if an institutional entity exists physically, it can also exist virtually. Therefore, many of our institutions and institutional practices, whether social, cultural, religious or economic, can exist in virtual or electronic form. It can be concluded that many virtual entities can be just as real as their physical counterparts. Virtuality and reality are therefore not each other's opposites. Nevertheless, a large part of ordinary reality, including most physical objects and processes, cannot be ontologically reproduced in virtual form. In addition, institutional virtual entities can both possess and lack real-world implications. Sometimes virtual money can also be used as real money, whereas at other times it is only a simulation of real money. People can also disagree on the status of virtual money, with some accepting it as legal tender and others distrusting it. The ontological distinction between reality and virtuality is for these reasons confusing, and the ontological status of encountered virtual objects will often not be immediately clear.

Is the Distinction Disappearing?

Some authors have argued that the emergence of computer-generated realities is working to erase the distinction between simulation and reality, and therefore between truth and fiction. Jean Baudrillard (1995), for example, has claimed that information technology, media, and cybernetics have yielded a transition from an era of industrial production to an era of simulation, in which models, signs and codes mediate access to reality and define reality to such an extent that it is no longer possible to make any sensible distinction between simulations and reality, so that the distinction between reality and simulation has effectively collapsed. Similarly, Albert Borgmann (1999) has argued that virtual reality and cyberspace have led many people to mistake them for alternative realities that have the same actuality as the real world, thus leading to a collapse of the distinction between representation and reality, whereas according to him VR and cyberspace are merely forms of information and should be treated as such. Philip Zhai (1998), finally, has argued that there is no principled distinction between actual reality and virtual reality, and that with further technological improvements in VR, including the addition of functional teleoperation, virtual reality could be made totally equivalent to actual reality in its functionality for human life. Effectively, Zhai is arguing that any real-world entity can be ontologically reproduced in VR given the right technology, and that virtual environments are becoming ontologically more like real environments as technology progresses.

Are these authors right that the distinction between virtuality and reality, and between simulation and reality, is disappearing? First, it is probably true that there is increasingly less difference between the virtual and the real. This is because, as has already been argued, many things are virtual and real at the same time. Moreover, the number of things that are both virtual and real seems to be increasing. This is because, as the possibilities of computers and computer networks increase, more and more physical and institutional entities are reproduced in virtual form.
There is a flight to the digital realm, in which many believe it is easier and more fun to buy and sell, listen to music or look at art, or do one's banking. For many people, therefore, an increasingly large part of their real lives is also virtual, and an increasingly large part of the virtual is also real.

Even if virtuality and reality are not opposite concepts, simulation and reality, and representation and reality, certainly are. Are these two distinctions disappearing as well? Suggesting that they are at least becoming more problematic is the fact that more and more of our knowledge of the real world is mediated by representations and simulations, whether they are models in science, raw footage and enactments in broadcast news, or stories and figures in newspapers or on the Internet. Often it is not possible, in practice or in principle, to verify the truth or accuracy of these representations through direct inspection of the corresponding states of affairs. Therefore, one might argue that these representations become reality for us, for they are all the reality we know. In addition, the distinction between recordings and simulations is becoming more difficult to make. Computer technology has made it easy to manipulate photos, video footage and sound recordings, and to generate realistic imagery, so that it is nowadays often unclear whether photographic images or video footage on the Internet or in the mass media are authentic, fabricated or enacted. The trend in mass media towards "edutainment" and the enactment and staging of news events has further problematized the distinction.

Yet all this does not prove that the distinction between simulation or representation and reality has collapsed. People do not get all of their information from media representations. They also move around and observe the world for themselves. People still question and critically investigate whether representations are authentic or correspond to reality. People hence still maintain an ontological distinction, even though it has become more difficult epistemologically to discern whether things and events are real or simulated. Zhai's suggestion that the distinction could be completely erased through further perfection of virtual reality technology is unlikely to hold, because it is unlikely that virtual reality could ever fully emulate actual reality in its functionality for human life. Virtual reality environments cannot, after all, sustain real biological processes, and therefore they can never substitute for the complete physical world.

Evaluating the Virtual as a Substitute for the Real

Next to the ontological and epistemological questions regarding the distinction between the virtual and the real and how we can know this distinction, there is the normative question of how we should evaluate virtuality as a substitute for reality. First of all, are virtual things better or worse, more or less valuable, than their physical counterparts? Some authors have argued that they are in some ways better: they tend to be more beautiful, shiny and clean, and more controllable, predictable and timeless. They attain, as Michael Heim (1983) has argued, a supervivid hyper-reality, like the ideal forms of Platonism, more perfect and permanent than the everyday physical world, answering to our desire to transcend our mortal bodies and reach a state of permanence and perfection. Virtual reality, it may seem, can help us live lives that are more perfect, more stimulating and more in accordance with our fantasies and dreams.
Critics of virtuality have argued that the shiny, polished objects of VR are mere surrogates: simplified and inferior substitutes for reality that lack authenticity. Albert Borgmann (1999), for example, has argued that virtuality is an inadequate substitute for reality because of its fundamental ambiguity and fragility, and because it lacks the engagement and splendor of reality. He also argues that virtuality threatens to alter our perspective on reality, causing us to see it as yet another sign or simulation. Hubert Dreyfus (2001) has argued that presence in VR and cyberspace gives a disembodied and therefore false experience of reality, and that even immersive VR and telepresence present one with impoverished experiences.

Another criticism of the virtual as a substitute for the real is that investments in virtual environments tend to correlate with disinvestments in people and activities in real life (Brey, 1998). Even if this were no loss to the person making the disinvestments, it may well be a loss to others affected by it. If a person puts great effort into caring for virtual characters, he or she may have less time left to give similar care and emotional attention to actual persons and animals, or may be less interested in giving it. In this way, investments in VR could lead to a neglect of real life and therefore a more solitary society. On the other hand, virtual environments can also be used to vent aggression, harming only virtual characters and property and possibly preventing similar actions in real life.

Representation and Simulation: Ethical Issues

VR and computer simulations are representational media: they represent real or fictional objects and events. They do so by means of different types of representations: pictorial images, sounds, words and symbols. In this section, ethical aspects of such representations will be investigated. It will be asked whether representations are morally neutral and whether their manufacture and use in VR and computer simulations involves ethical choices.

Misrepresentations, Biased Representations and Indecent Representations

I will argue that representations in VR or computer simulations can be morally problematic for any of three reasons. First, they may cause harm by failing to uphold standards of accuracy; that is, they may misrepresent reality. Such representations will be called misrepresentations. Second, they may fail to uphold standards of fairness, thereby unfairly disadvantaging certain individuals or groups. Such representations will be called biased representations. Third, they may violate standards of decency and public morality. I will call such representations indecent representations.

Misrepresentation in VR and computer simulation occurs when it is part of the aim of a simulation to realistically depict aspects of the real world, yet the simulation fails to accurately depict these features (Brey, 1999). Many simulations aim to faithfully depict existing structures, persons, states of affairs, processes or events. For example, VR applications have been developed that simulate in great detail the visual features of existing buildings such as the Louvre or the Taj Mahal, or the behavior of existing automobiles or airplanes. Other simulations do not aim to represent particular existing structures, but nevertheless aim to be realistic in their portrayal of people, things and events.
For example, a VR simulation of military combat will often be intended to contain realistic portrayals of people, weaponry and landscapes without intending to represent particular individuals or a particular landscape. When simulations aim to be realistic, they are expected to live up to certain standards of accuracy. These are standards that define the degree of freedom that exists in the depiction of a phenomenon, and that specify what kinds of features must be included in a representation for it to be accurate, what level of detail is required, and what kinds of idealizations are permitted. Standards of accuracy are fixed in part by the aim of a simulation. For example, a simulation of operating room procedures should be highly accurate if it is used for medical training, somewhat accurate when sold as edutainment, and need not be accurate at all when part of a casual game. Standards of accuracy can also be fixed by promises or claims made by manufacturers. For example, if a game promises that operating room procedures in it are completely realistic, the standards of accuracy for the simulation of these procedures will be high. People may also disagree about the standards of accuracy that are appropriate for a particular simulation. For example, a VR simulation of military combat that does not represent killings in graphic detail may be discounted as inaccurate and misleading by anti-war activists, but may be judged sufficiently realistic by the military for training purposes.

Misrepresentations of reality in VR and computer simulations are morally problematic to the extent that they can result in harm. The greater these harms are, and the greater the chance that they occur, the greater the moral responsibility of designers and manufacturers to ensure the accuracy of representations. Obviously, inaccuracies in VR simulations of surgical procedures for medical training, or in computer simulations used to test the bearing power of bridges, can have grave consequences. A misrepresentation of the workings of an engine in educational software causes a lesser or less straightforward harm: it causes students to have false beliefs, some of which could cause harm at a later point in time.

Biased representations constitute a second category of morally problematic representations in VR modeling and computer simulation (Brey, 1999). A biased representation is a representation that unfairly disadvantages certain individuals or groups or that unjustifiably promotes certain values or interests over others. A representation can be biased in the way it idealizes or selectively represents phenomena. For example, a simulation of global warming may be accurate overall but unjustifiably ignore the contribution to global warming made by certain types of industries or countries. Representations can also be biased by stereotyping people, things and events. For example, a computer game may contain racial or gender stereotypes in its depiction of people and their behaviors. Representations can moreover be biased by containing implicit assumptions about the user, as in a computer game that plays out male heterosexual fantasies, thereby assuming that players will generally be male and heterosexual. They can also be biased by representing affordances and interactive properties in objects that make them supportive of certain values and uses but not of others. For example, a gun in a game may be designed so that it can be used to kill but not to knock someone unconscious.
Indecent representations constitute a third and final category of morally problematic representations. Indecent representations are representations that are considered shocking or offensive, or that are held to break established rules of good behavior or morality. Decency standards vary widely across individuals and cultures, however, and what is shocking or immoral to some will not be so to others. Some will find any depiction of nudity, violence or physical deformities indecent, whereas others will find any such depiction acceptable. The depiction of particular acts, persons or objects may be considered blasphemous in certain religions but not outside these religions. For this reason, the notion of an indecent representation is a relative notion, and there will usually be disagreement about which representations count as indecent. In addition, the context in which representations take place may also influence whether they are considered decent. For example, the representation of open heart surgery, with some patients surviving the procedure but others dying on the operating table, may be inoffensive in the context of a medical simulator, but offensive in the context of a game that makes light of such a procedure.

Virtual Child Pornography

Pornographic images and movies are considered indecent by many, but there is a fairly large consensus that people have a right to produce pornography and use it in private. Such a consensus does not exist for certain extreme forms of pornography, including child pornography. Child pornography is considered wrong because it harms the children that are used to produce it. But what about virtual child pornography? Virtual child pornography is the digital creation of images or animated pictures that depict children engaging in sexual activities or that depict them in a sexual way. Nowadays, such images and movies can be made to be highly realistic. No real children are abused in this process, and therefore the major reason for outlawing child pornography does not apply to it. Does this mean that virtual child porn is morally permissible and that its production and consumption should be legal?

The permissibility of virtual child porn has been defended on the argument that no actual harm is done to children and that people have a right to free speech, under which they should be permitted to produce and own virtual child pornography even if others find such images offensive. Indeed, the U.S. Supreme Court struck down a congressional ban on virtual child porn in 2002 with the argument that this ban constituted too great a restriction on free speech. The court also claimed that no proof had been given of a connection between computer-generated child pornography and the exploitation of actual children. An additional argument that is sometimes used in favor of virtual child porn is that its availability to pedophiles may actually decrease the chances that they will harm children. Opponents of virtual child porn have sometimes responded with deontological arguments, claiming that it is degrading to children and undermines human dignity. Such arguments cut little ice, however, in a legal arena that is focused on individual rights and harms. Since virtual child porn does not seem to violate individual rights, opponents have tried out various arguments to the effect that it does cause harm.
One such argument is that virtual child porn causes indirect harm to children because it encourages child abuse. This argument runs directly counter to the previously stated argument that virtual child porn should be condoned because it makes child abuse less likely. The problem is that it is very difficult to conduct studies that provide solid empirical evidence for either position. Another argument is that failing to criminalize virtual child porn will harm children because it makes it difficult to enforce laws that prohibit actual child pornography. This argument has often been used by law enforcers to criminalize virtual child porn. As Neil Levy (2002) has argued, however, this argument is not plausible, amongst other reasons because experts are usually able to distinguish between virtual and actual pictures. Levy's own argument against virtual child porn is not that it will indirectly harm children, but that it may ultimately harm women by eroticizing inequality in sexual relationships. He admits, however, that he lacks the empirical evidence to back up this claim. Per Sandin (2004) has presented an argument with better empirical support, which is that virtual child porn should be outlawed because it causes significant harm to a great many people who are repulsed by it. The problem with this argument, however, is that it gives too much weight to harm caused by offense. If actions were outlawed whenever they offend a large group of people, then individual rights would be drastically curtailed, and many things, ranging from homosexual behavior to interracial marriage, would still be illegal. It can be concluded that virtual child pornography will remain a morally controversial issue for some time to come, as no decisive arguments for or against it have been provided so far.

Depiction of Real Persons

Virtual environments and computer simulations increasingly include characters that are modeled after the likeness of real persons, whether living or deceased. Also, films and photographs increasingly include manipulated or computer-generated images of real persons who are placed in fictional scenes or are made to perform behaviors that they have not performed in real life. Such appropriations of likenesses are often made without the person's consent. Is such consent morally required, or should the depiction of real persons be seen as an expression of artistic freedom or free speech?

Against arguments for free speech, three legal and moral arguments have traditionally been given for restrictions on the use of someone's likeness (Tabach-Bank, 2004). First, the right to privacy has been appealed to. It has been argued that the right to privacy includes a right to live a life free from unwarranted publicity (Prosser, 1960). The public use of someone's likeness can violate his privacy by intruding upon his seclusion or solitude or into his private affairs, by publicly disclosing embarrassing private facts about him, or by placing him in a false light in the public eye. A second argument for restricting the use of someone's likeness is that it can be used for defamation. Depicting someone in a certain way, for example as being involved in immoral behavior or in a ridiculous situation, can defame him by harming his public reputation. In some countries, like the U.S., there is also a separately recognized right of publicity. The right of publicity is an individual's right to control and profit from the commercial use of his name, likeness and persona.
The right of publicity has emerged as a protection of the commercial value of the identity of public personalities, or celebrities, who frequently use their identity to sell or endorse products or services. It is often agreed that celebrities have less of an expectation of privacy, because they are public personalities, but a greater expectation of a right of publicity. In the use of the likenesses of real persons in virtual environments or doctored digital images, rights to free speech, freedom of the press and freedom of artistic expression will therefore have to be balanced against the right to privacy, the right of publicity and the right to protection from defamation.

Behavior in Virtual Environments: Ethical Issues

The preceding section focused on ethical issues in design and on the values embedded in VR and computer simulations. This section focuses on ethical issues in the use of VR and interactive computer simulations. Specifically, the focus will be on the question of whether actions within the worlds generated by these technologies can be unethical. This issue will be analyzed for both single-user and multi-user systems. Before taking it up, it will first be considered how actions in virtual environments take place, and what the relation is between users and the characters as which they appear in virtual environments.

Avatars, Agency and Identity

In virtual environments, users assume control over a graphically realized character called an avatar. Avatars can be built after the likeness of the user, but more often they are generic persons or fantasy characters. Avatars can be controlled from a first-person perspective, in which the user sees the world through the avatar's eyes, or from a third-person perspective. In multi-user virtual environments, there will be multiple avatars corresponding to different users. Virtual environments also frequently contain bots, which are programmed or scripted characters that behave autonomously and are controlled by no one.

The identity that users assume in a virtual environment is a combination of the features of the avatar they choose, the behaviors that they choose to display with it, and the way others respond to the avatar and its behaviors. An avatar can function as a manifestation of the user, who behaves and acts like himself, and to whom others respond as if the avatar were the user himself, or as a character that has no direct relation to the user and that merely plays out a role. The actions performed by avatars can therefore range from authentic expressions of the personality and identity of the user to experimentation with identities that are the opposite of who the user normally is.

Whether or not the actions of an avatar correspond with how a user would act in real life, there is no question that the user is causally and morally responsible for the actions performed by his or her avatar. This is because users normally have full control over the behavior of their avatars through one or more input devices. There are occasional exceptions to this rule, because avatars are sometimes taken over by the computer and then behave as bots. The responsibility for the behavior of bots could be assigned to their programmer, to whoever introduced them into a particular environment, or even to the programmer of the environment for not disallowing harmful actions by bots (Ford, 2001).
Behavior in Single-User VR

Single-user VR offers far fewer possibilities for unethical behavior than multi-user VR, because there are no other human beings who could be directly affected by the behavior of a user. The question is whether there are any behaviors in single-user VR that could qualify as unethical. In Brey (1999), I considered the possibility that certain actions that are unethical when performed in real life could also be unethical when performed in single-user VR. My focus was particularly on violent and degrading behavior towards virtual human characters, such as murder, torture and rape. I considered two arguments for this position: the argument from moral development and the argument from psychological harm.

According to the argument from moral development, it is wrong to treat virtual humans cruelly because doing so will make it more likely that we will treat real humans cruelly. The reason for this is that the emotions appealed to in the treatment of virtual humans are the same emotions that are appealed to in the treatment of real humans, because these actions resemble each other so closely. This argument has recently gained empirical support (Slater et al., 2006). The argument from psychological harm is that third parties may be harmed by the knowledge or observation that people engage in violent, degrading or offensive behavior in single-user VR, and that this behavior is therefore immoral. This argument is similar to the argument attributed to Sandin in my earlier discussion of indecent representations. I claimed in Brey (1999) that although harm may be caused by particular actions in single-user VR because people may be offended by them, it does not necessarily follow that the actions are immoral, but only that they cause indirect harm to some people. One would have to balance such harms against any benefits, such as pleasurable experiences for the user.

Matt McCormick (2001) has offered yet another argument according to which violent and degrading behavior in single-user VR can be construed as unethical. He argues that repeated engagement in such behavior erodes one's character and reinforces virtueless habits. He follows Aristotelian virtue ethics in arguing that this is bad because it makes it difficult for us to lead fulfilling lives, since, as Aristotle argued, a fulfilling life can only be lived by those who are of virtuous character. More generally, the argument can be made that the excessive use of single-user VR keeps one from leading a good life, even if one's actions in it are virtuous, because one invests in fictional worlds and fictional experiences that seem to fulfill one's desires but do not actually do so (Brey, forthcoming).

Behavior in Multi-User VR

Many unethical behaviors between persons in the real world can also occur in multi-user virtual environments. As discussed earlier in the section on reality and virtuality, there are two classes of real-world phenomena that can also exist in virtual form: institutional entities that derive their status from collective agreements, like money, marriage and conversations, and certain physical and formal entities, like images and musical pieces, that computers are capable of physically realizing. Consequently, unethical behaviors involving such entities can also occur in VR, and it is possible for there to be real thefts, insults, deceptions, invasions of privacy, breaches of contract, or damage to property in virtual environments.
Immoral behaviors that cannot really happen in virtual environments are those that are necessarily defined over physically realized entities. For example, there can be real insults in virtual environments, but not real murders, because real murders are defined over persons in the physical world, and the medium of VR does not equip users with the power to kill persons in the physical world. It may, of course, be possible to kill avatars in VR, but these are not killings of real persons. It may also be possible to plan a real murder in VR, for example by using VR to meet up with a hitman, but the murder itself cannot then be carried out in VR.

Even though virtual environments can be the site of real events with real consequences, they are often recognized as fictional worlds in which characters merely play out roles. In such cases, even an insult may not be a real insult, in the sense of an insult made by a real person to another real person, because it may only have the status of an insult between two virtual characters. The insult is then only real in the context of the virtual world, but not real in the real world. Ambiguities arise, however, because it will not always be clear when actions and events in virtual environments should be seen as fictional or real (Turkle, 1995). Users may assign different statuses to objects and events, and some users may identify closely with their avatar, so that anything that happens to their avatar also happens to them, whereas others may see their avatar as an object detached from themselves with which they do not identify closely. For this reason, some users may feel insulted when their avatar is insulted, whereas others will not feel insulted at all.

This ambiguity in the status of many actions and events in virtual worlds can lead to moral confusion as to when an act that takes place in VR is genuinely unethical and when it merely resembles a certain unethical act. The most famous case of this is the "rape in cyberspace" reported by Julian Dibbell (1993). Dibbell reported an instance of a "cyberrape" in LambdaMOO, a text-only virtual environment in which users interact through user-programmable avatars. One user used a subprogram that took control of avatars and made them perform sex acts on each other. Users felt their characters were raped, and some felt that they themselves were indirectly raped or violated as well. But is it ever possible for someone to be raped through a rape of her avatar, or does rape require a direct violation of someone's body? Similar ambiguities exist for many other immoral practices in virtual environments, like adultery and theft. If it would constitute adultery for two persons to have sex with each other, does it also constitute adultery when their avatars have sex? When a user steals virtual money or property from other users, should he be considered a thief in real life?

Virtual Property and Virtual Economies

For any object or structure found in a virtual world, one may ask: Who owns it? This question is already ambiguous, however, because there may be both virtual and real-life owners of virtual entities. For example, a user may be the owner of an island in a virtual world, while the whole world, including the island, may be owned by the company that created it and permits users to act out roles in it. Users may also become creators of virtual objects, structures and scripted events, and some put hundreds of hours of work into their creations.
May they therefore also assert intellectual property rights over their creations? Or can the company that owns the world in which the objects are found, and the software with which they were created, assert ownership? What kind of framework of rights and duties should be applied to virtual property (Burk, 2005)?

The question of property rights in virtual worlds is further complicated by the emergence of so-called virtual economies. Virtual economies are economies that exist within the context of a persistent multi-user virtual world. Such economies have emerged in virtual worlds like Second Life and The Sims Online, and in massively multiplayer online role-playing games (MMORPGs) like Entropia Universe, World of Warcraft, EverQuest and EVE Online. Many of these worlds have millions of users. Economies can emerge in virtual worlds if there are scarce goods and services in them for which users are willing to spend time, effort or money, if users can also develop specialized skills to produce such goods and services, if users are able to assert property rights over goods and resources, and if they can transfer goods and services among themselves. Some economies in these worlds are primitive barter economies, whereas others make use of recognized currencies. Second Life, for example, makes use of the Linden Dollar (L$) and Entropia Universe has the Project Entropia Dollar (PED), both of which have an exchange rate against real U.S. dollars. Users of these worlds can hence choose to acquire such virtual money by doing work in the virtual world (e.g., by selling services or opening a virtual shop) or by making money in the real world and exchanging it for virtual money.

Virtual objects are now frequently traded for real money outside the virtual worlds that contain them, on online trading and auction sites like eBay. Some worlds also allow for the trade of land. In December 2006, the average price of a square meter of land in Second Life was L$ 9.68, or U.S. $0.014 (up from L$ 6.67 in November), and over 36,000,000 square meters were sold.[1] Users have been known to pay thousands of dollars for cherished virtual objects, and over $100,000 for real estate.

The emergence of virtual economies in virtual environments raises the stakes for their users and increases the likelihood that moral controversies ensue. People will naturally be more likely to act immorally if money is to be made or if valuable property is to be had. In one incident, which took place in China, a man lent a precious sword to another man in the online game Legend of Mir 3, who then sold it to a third party. When the lender found out about this, he visited the borrower at his home and killed him.[2] Cases have also been reported of Chinese sweatshop laborers who work day and night in conditions of practical slavery to collect resources in games like World of Warcraft and Lineage, which are then sold for real money.

[1] Source: https://secondlife.com/whatis/economy_stats.php. Accessed 1/3/2007.
[2] Online gamer killed for selling cyber sword. ABC News Online, March 30, 2005. http://www.abc.net.au/news/newsitems/200503/s1334618.htm

There have also been reported cases of virtual prostitution, for instance on Second Life, where users are paid to (use their avatar to) perform sex acts or to serve as escorts. There have also been controversies over property rights. On Second Life, for example, controversy ensued when someone introduced a program called CopyBot that could copy any item in the world.
This program wreaked havoc on the economy, undermining the livelihood of thousands of business owners in Second Life, and was eventually banned after mass protests.[3] Clearly, then, the emergence of virtual economies and serious investments in virtual property generates many new ethical issues in virtual worlds. The more time, money and social capital people invest in virtual worlds, the more such ethical issues will come to the fore.

The Ethics of Computer Games

Contemporary computer and video games often play out in virtual environments or include computer simulations, as defined earlier. Computer games are nowadays a mass medium. A recent study shows that the average American 8- to 18-year-old spends almost six hours per week playing computer games, and that 83% have access to a video game console at home (Rideout, Roberts and Foehr, 2005). Adults are also players, with four in ten playing computer games on a regular basis.[4] In 2005, the revenue generated in the U.S. by the computer and video game industry was over U.S. $7 billion, far surpassing the film industry's annual box office results.[5] Computer games have had a vast impact on youth culture, but they also significantly influence the lives of adults. For these reasons alone, an evaluation of their social and ethical aspects is needed.

[3] Linden bans CopyBot following resident protests. Reuters News, Wednesday November 15, 2006. http://secondlife.reuters.com/stories/2006/11/15/linden-bans-copybot-following-resident-protests/
[4] Poll: 4 in 10 adults play electronic games. MSNBC.com, May 8, 2006. http://www.msnbc.msn.com/id/12686020/
[5] 2006 Essential Facts about the Computer and Video Game Industry, Entertainment Software Association, 2006. http://www.theesa.com/archives/files/Essential%20Facts%202006.pdf

Some important issues bearing on the ethics of computer games have already been discussed in previous sections, and will therefore be covered less extensively here. These include, amongst others, ethical issues regarding biased and indecent representations; issues of responsibility and identity in the relation between avatars, users and bots; the ethics of behavior in virtual environments; and moral issues regarding virtual property and virtual economies. These issues, and the conclusions reached regarding them, all fully apply to computer games. The focus in this section will be on three important ethical questions that apply to computer games specifically: Do computer games contribute to individual well-being and the social good? What values should govern the design and use of computer games? And do computer games contribute to gender inequality?

The Goods and Ills of Computer Games

Are computer games a benefit to society? Many parents do not think so. They worry about the extraordinary amount of time their children spend playing computer games, and about the excessive violence that takes place in many games. They worry about negative effects on family life, schoolwork and the social and moral development of their kids. In the media, there has been much negative reporting about computer games. There have been stories about computer game addiction and about players dying from exhaustion and starvation after playing video games for days on end. There have been stories about ultraviolent and otherwise controversial video games, and the ease with which children can gain access to them.
The Columbine High School massacre in 1999, in which two teenage students went on a shooting rampage, was reported in the media to have been inspired by the video game Doom, and since then other mass shootings have also been claimed to have been inspired by video games. Considerable doubt has been raised, therefore, as to whether computer games are indeed a benefit to society rather than a social ill. The case against computer games tends to center on three perceived negative consequences: addiction, aggression and maladjustment.

The perceived problem of addiction is that many gamers get so caught up in playing that their health, work or study, family life, and social relations suffer. How large this problem really is has not yet been adequately documented (though see Chiu, Lee and Huang, 2004). There clearly is a widespread problem, as is evidenced by the worldwide emergence of clinics for video game addicts in recent years. Not all hard-core gamers will be genuine addicts in the psychiatric sense, but many do engage in overconsumption, resulting in the neglect described above. The partners of adults who engage in such overconsumption are sometimes called gamer widows, analogous to soccer widows, denoting that they have a relationship with a gamer who pays more attention to the game than to them.

Whereas there is no doubt that addiction to video games is a real social phenomenon, there is less certainty that playing video games can be correlated with increased aggression, as some have claimed. The preponderance of the evidence seems to indicate, however, that the playing of violent video games is correlated with increases in aggression, including increases in aggressive thoughts, feelings and behaviors, a desensitization to real-life violence, and a decrease in helpful behaviors (Carnagey, Anderson and Bushman, forthcoming; Bartholow, 2005). However, some studies have found no such correlations, and present findings remain controversial. Whatever the precise relation between violent video games and aggression turns out to be, it is clear that there is a huge difference between the way children are taught to behave towards others by their parents and the way they learn to behave in violent video games. This at least raises the question of how their understanding of and attitude towards violence and aggression is influenced by violent video games.

A third hypothesized ill of video games is that they cause individuals to become socially and cognitively stunted and maladjusted. This maladjustment is attributed in part to the neglect of studies and social relations due to overindulgence in video games, and to increased aggression levels from playing violent games. But it is also held to be due to the specific skills and understandings that users gain from video games. Children who play video games are exposed to conceptions of human relations and the workings of the world that have been designed into the games by their developers. These conceptions have not been designed to be realistic or pedagogical, and often rely on stereotypes and simplistic modes of interaction and solutions to problems. It is therefore conceivable that children develop ideas and behavioral routines while playing computer games that leave much to be desired.

The case in favor of computer games begins with the observation that they are a new and powerful medium that brings users pleasure and excitement, and that allows for new forms of creative expression and new ways of acting out fantasies.
Games, moreover, do not just cause social isolation; they can also stimulate social interaction. Playing multiplayer games is a social activity that involves interactions with other players, and that can even help solitary individuals find new friends. Computer games may moreover induce social learning and train social skills. This is especially true for role-playing games and games that involve verbal interactions with other characters. Such games let players experiment with social behavior in different social settings, and role-playing games can also make users intimately familiar with the points of view and experiences of persons other than themselves. Computer games have moreover been claimed to improve perceptual, cognitive and motor skills, for example by improving hand-eye coordination and visual recognition skills (Johnson, 2005; Green and Bavelier, 2003).

Computer Games and Values

It has long been argued in computer ethics that computer systems and software are not value-neutral but are instead value-laden (Nissenbaum, 1998; Brey, 2000). Computer games are no exception. Computer games may suggest, stimulate, promote or reward certain values while shunning or discouraging others. Computer games are value-laden, first of all, in the way they represent the world. As discussed earlier, such representations may contain a variety of biases. They may, for example, promote racial and gender stereotypes (Chan, 2005; Ray, 2003), and they may contain implicit, biased assumptions about the abilities, interests or gender of the player. Simulation games like SimCity may suggest all kinds of unproven causal relations, for example between poverty and crime, that may help shape attitudes and feed prejudices. Computer games may also be value-laden in the interactions that they make possible. They may, for example, be designed to make violent action the only solution to problems faced by a player. Computer games can also be value-laden in the storylines they suggest for players and in the feedback and rewards that are given. Some first-person shooters award extra points, for example, for not killing innocent bystanders, whereas others instead award extra points for killing as many bystanders as possible.

A popular game like The Sims can serve to illustrate how values are embedded in games. The Sims is a game that simulates the everyday lives and social relationships of ordinary persons. The goal of characters in the game is happiness, which is attained through the satisfaction of needs like Hunger, Comfort, Hygiene and Fun. These needs can be satisfied through success in one's career, and through consumption and social interaction. As Miguel Sicart (2003) has argued, The Sims thus presents an idealized version of a progressive liberal consumer society in which the goal in life is happiness, gained by being a good worker and consumer. A schematic illustration of such a reward structure is given below.

The team-based first-person shooter America's Army presents another example. This game is offered as a free download by the U.S. government, which uses it to stimulate U.S. Army recruitment. The game is designed to give a positive impression of the U.S. Army. Players play as servicemen who obey orders and work together to combat terrorists. The game claims to be highly realistic, yet it has been criticized for not showing certain realistic aspects of military life, such as collateral damage, harassment, and gore.
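To make the point about embedded values more concrete, the following is a minimal, hypothetical sketch in Python of a needs-based reward loop of the kind described above for The Sims. It is an illustration only, not the game's actual implementation: the need names, the available actions and the numbers are all invented here.

# A minimal, hypothetical sketch (not the actual code of The Sims or any game)
# of a needs-based reward loop. The need names, actions and numbers below are
# invented; the point is that whatever the designer lists as a need, and
# whichever actions are allowed to satisfy it, together define what counts as
# a good life inside the game.

NEEDS = ["hunger", "comfort", "hygiene", "fun"]

# Each action raises or drains some needs. Only work, consumption and social
# interaction are on offer, so only they can raise the happiness score.
ACTIONS = {
    "go_to_work":  {"comfort": -10, "fun": -5},  # career progress, but drains needs
    "buy_sofa":    {"comfort": +30},             # consumption raises comfort
    "eat_meal":    {"hunger": +40},
    "take_shower": {"hygiene": +35},
    "chat_friend": {"fun": +25},                 # social interaction raises fun
}

def apply_action(needs, action):
    """Return need levels after performing an action, clamped to the 0-100 range."""
    effects = ACTIONS[action]
    return {n: max(0, min(100, level + effects.get(n, 0))) for n, level in needs.items()}

def happiness(needs):
    """Happiness is simply average need satisfaction -- itself a value-laden design choice."""
    return sum(needs.values()) / len(needs)

sim = {n: 50 for n in NEEDS}  # a character starting with all needs half-satisfied
for act in ["go_to_work", "buy_sofa", "eat_meal", "chat_friend"]:
    sim = apply_action(sim, act)
    print(f"after {act:12} happiness = {happiness(sim):.1f}")

Within such a structure, the only route to a higher happiness score runs through the actions the designer has chosen to provide; this is precisely the sense in which a game's reward system embeds a particular conception of the good life.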
The question is how much influence computer games actually have on the values of players. The amount of psychological research done on this topic is still limited. However, psychological research on the effects of other media, such as television, has shown that such media are very influential in shaping the values of media users, especially children. Since many children are avid consumers of computer games, there are reasons to be concerned about the values projected onto them by such games. Children are still involved in a process of social, moral and cognitive development, and computer games seem to have an increasingly large role in this developmental process. Concern about the values embedded in video games therefore seems warranted. On the other hand, computer games are games, and therefore should allow for experimentation, fantasy, and going beyond socially accepted boundaries. The question is how games can support such social and moral freedom without also supporting the development of skewed values in younger players.

Players do not just develop values on the basis of the structure of the game itself, they also develop them by interacting with other players. Players communicate messages to each other about game rules and acceptable in-game behavior. They can respond positively or negatively to certain behaviors, and may praise or berate other players. In this way, social interactions in games may become part of the socialization of individuals and influence their values and social beliefs. Some of these values and norms may remain limited to the game itself, for example norms governing the permissibility of cheating (Kimppa and Bissett, 2005). In some games, however, like massively multiplayer online role-playing games (MMORPGs), socialization processes are so complex as to resemble real life (Warner and Raiter, 2005), and values learned in such games may be applied to real life as well.

Computer Games and Gender

Game magazines and game advertisements foster the impression that computer games are a medium for boys and men. Most pictured gamers are male, and many recurring elements in images, such as scantily clad, big-breasted women, big guns and fast cars, seem to be geared toward men. The impression that computer games are mainly a medium for men is further supported by usage statistics. Research has consistently shown that fewer girls and women play computer games than boys and men, and that those who do play spend less time playing than male gamers. According to research performed by Electronic Arts, a game developer, among teenagers only 40% of girls play computer games, compared to 90% of boys. Moreover, when they reach high school, most girls lose interest, whereas most boys keep playing.6 A study by the UK games trade body, the Entertainment and Leisure Software Publishers Association, found that in Europe women gamers make up only a quarter of the gaming population.7

The question whether there is a gender bias in computer games is morally significant because it is a question about gender equality. If it is the case that computer games tend to be designed and marketed for men, then women are at an unfair disadvantage, as they consequently have less opportunity to enjoy computer games and their possible benefits.

6 Games industry is 'failing women'. BBC News, August 21, 2006. http://news.bbc.co.uk/2/hi/technology/5271852.stm
7 Chicks and Joysticks. An Exploration of Women and Gaming. ELSPA White Paper, September 2004. www.elspa.com/assets/files/c/chicksandjoysticksanexplorationofwomenandgaming_176.pdf
Among such benefits may be greater computer literacy, an important quality in today's marketplace. But is the gender gap in the usage of computer games really the result of gender bias in the gaming industry, or could it be the case that women are simply less interested in computer games than men, regardless of how games are designed and marketed? Most analysts hold that the gaming industry is largely to blame. They point to the fact that almost all game developers are male, and that there have been few efforts to develop games suitable for women. To appeal to women, it has been suggested, computer games should be less aggressive, because women have been socialized to be non-aggressive (Norris, 2004). It has also been suggested that women have a greater interest in multiplayer games, games with complex characters, games that contain puzzles, and games that are about human relationships. Games should also avoid assumptions that the player is male and avoid stereotypical representations of women. Few existing games contain good role models for women. Studies have found that most female characters in games have unrealistic body images and display stereotypical female behaviors, and that a disproportionate number of them are prostitutes and strippers.8

8 Fair Play: Violence, Gender and Race in Video Games. Children Now, December 2001. 36 pp. http://publications.childrennow.org/

Virtual Reality, Simulation and Professional Ethics

In discussing issues of professional responsibility in relation to virtual reality systems and computer simulations, a distinction can be made between the responsibility of developers of such systems and that of professional users. Professional users can be claimed to have a responsibility to acquaint themselves with the technology and its potential consequences and to use it in a way that is consistent with the ethics of their profession. The responsibility of developers includes giving consideration to ethical aspects in the design process and engaging in adequate communication about the technology and its effects to potential users.

In the development of computer simulations, the accuracy of the simulation and its reliability as a foundation for decision-making in the real world are of paramount importance. The major responsibility of simulation professionals is therefore to avoid misrepresentations where they can and to adequately communicate the limitations of simulations to users (McLeod, 1983). These responsibilities are, indeed, a central ingredient in a recent code of ethics for simulationists that has been adopted by a large number of professional organizations in the field (Ören et al., 2002). The responsibility for accuracy entails the responsibility to take proper precautions to ensure that modeling mistakes do not occur, especially when the stakes are high, and to inform users if inaccuracies do or may occur. It also entails the responsibility not to participate in the intentional deception of users (e.g., through embellishment, dramatization, or censorship).

In Brey (1999), I have argued that designers of simulations and virtual environments also have a responsibility to incorporate proper values into their creations. It has been argued earlier that representations and interfaces are not value-free but may contain values and biases. Designers have a responsibility to reflect on the values and biases contained in their creations and to ensure that they do not violate important ethical principles.
The responsibility to do this follows from the ethical codes that are in use in different branches of engineering and computer science, especially the principle that professional expertise should be used for the enhancement of human welfare. If technology is to promote human welfare, it should not contain biases and should take into account the values and interests of stakeholders or society at large. Taking such values into account and avoiding biases in design cannot be done without a proper methodology. Fortunately, a detailed proposal for such a methodology has recently been made by Batya Friedman and her associates, and has been termed value-sensitive design (Friedman, Kahn and Borning, 2006).

Special responsibilities apply to different areas of application of VR and computer simulations. The use of virtual reality in therapy and psychotherapy, for example, requires special attention to principles of informed consent and the ethics of experimentation with human subjects (Wiederhold and Wiederhold, 2004). The computer and video game industry can be argued to have a special responsibility to consider the social and cultural impact of its products, given that they are used by a mass audience that includes children. Arguably, game developers should consider the messages that their products send to users, especially children, and should work to ensure that they develop and market content that is age-appropriate and more inclusive of all genders.

Virtual reality and computer simulation will continue to present new challenges for ethics, because new and more advanced applications are still being developed and their use is becoming ever more widespread. Moreover, as has been argued, virtual environments can mimic many of the properties of real life, and therefore contain many of the ethical dilemmas found in real life. It is for this reason that they will not just continue to present new ethical challenges for professional developers and users, but also for society at large.

Computer simulation A computer simulation is a computer program that contains a model of a particular system (either actual or theoretical) and that can be executed, after which the execution output can be analyzed. Computer simulation is also the name of the discipline in which such models are designed, executed and analyzed. The models in computer simulations are usually abstract and either are or involve mathematical models. Computer simulation has become a useful part of the mathematical modeling of many natural systems in the natural sciences, human systems in the social sciences, and technological systems in the engineering sciences, in order to gain insight into the operations of these systems and to study the effects of alternative conditions and courses of action. It is not usually an aim in computer simulations, as it is in virtual reality, to do realistic visual modeling of the systems that they simulate. Some of these systems are abstract, and even for those systems that are concrete, the choice is often made not to design graphic representations of the system, but to rely solely on abstract models of it. When graphical representations of concrete systems are used, they usually only represent features that are relevant to the aims of the simulation, and do not aspire to the realism and detail aspired to in virtual reality. Another difference with virtual reality is that computer simulations need not be interactive. Usually, simulators will determine a number of parameters at the beginning of a simulation and then “run” the simulation without any interventions. In this standard case, the simulator is not himself defined as part of the simulation, as would happen in virtual reality. An exception is an interactive simulation, which is a special kind of simulation, also referred to as a human-in-the-loop simulation, in which the simulation includes a human operator. An example of such a simulation would be a flight simulator. If a computer simulation is interactive and makes use of three-dimensional graphics and sensory feedback, it also qualifies as a form of virtual reality. Sometimes, also, the term “computer simulation” is used to include any computer program that models a system or environment, even if it is not used to gain insight into the operation of a system. In that broad sense, virtual environments, at least those that aim to do realistic modeling, would also qualify as computer simulations. Applications VR is used to simulate both real and imaginary environments. Traditional VR applications are found in medicine, education, arts and entertainment, and the military (Burdea and Coiffet, 2003). In medicine, VR is used for the simulation of anatomical structures and medical procedures in education and training, for example for performing virtual surgery. Increasingly, VR is also being used for (psycho)therapy, for instance for overcoming anxiety disorders by confronting patients with virtual anxiety-provoking situations (Wiederhold and Wiederhold, 2004). In education, VR is used in explorationbased learning and learning by building virtual worlds. In the arts, VR is used to create new art forms and to make the experience of existing art more dynamic and immersive. In entertainment, mostly nonimmersive, screen-based forms of VR are used in computer and video games and arcades. This is a form of VR that many people experience on a regular basis. In the military, finally, VR is used in a variety of training contexts for army, navy and air force. 
Emerging applications of VR are found in manufacturing, architecture, and training in a variety of (dangerous) civilian professions. Computer simulations are used in the natural and social sciences to gain insight into the functioning of natural and social systems, and in the engineering sciences for performance optimization, safety engineering, training and education. They are used on a large scale in the natural and engineering sciences, where such fields have sprung up as computational physics, computational neuroscience, computational fluid mechanics, computational meteorology and artificial life. They are also used on a somewhat more modest scale in the social sciences, for example in the computational modeling of cognitive processes in psychology, in the computational modeling of artificial societies and social processes, in computational economic modeling, and in strategic management and organizational studies. Computer simulations are increasingly used in education and training, to familiarize students with the workings of systems and to teach them to interact successfully with such systems. Virtuality and Reality The Distinction between the Virtual and the Real In the computer era, the term “virtual” is often contrasted with “real”. Virtual things, it is often believed, are things that only have a simulated existence on a computer, and are therefore not real, like physical things. Take, for example, rocks and trees in a virtual reality environment. They may look like real rocks and trees, but we know that they have no mass, no weight, and no identifiable location in the physical world, and are just illusions generated through electrical processes in microprocessors and the resulting projection of images on a computer screen. “Virtual” hence means “imaginary”, “makebelieve”, “fake”, and contrasts with “real”, “actual” and “physical”. A virtual reality is therefore always only a make-believe reality, and can as such be used for entertainment or training, but it would be a big mistake, in this view, to call anything in virtual reality real, and to start treating it as such. This popular conception of the contrast between virtuality and reality can, however, be demonstrated to be incorrect. “Virtual” is not the perfect opposite of “real”, and some things can be virtual and real at the same time. To see how this is so, let us start by considering the semantics of “virtual”. The word “virtual” has two traditional, pre-computer meanings. On the first, most salient meaning, it refers to things that have certain qualities in essence or in effect, but not in name. For instance, if only a few buffalo are left, one can say that buffalo are virtually extinguished, extinguished for all practical purposes, even though they are not formally or actually extinguished. Virtual can also mean imaginary, and therefore not real, as in optics, where reference is made to virtual foci and images. Notice that only on the second, least salient meaning, “virtual” contrasts with “real”. On the more salient meaning, it does not mean “unreal” but rather “practically but not formally real”. In the computer era, the word “virtual” came to refer to things simulated by a computer, like virtual memory, which is memory that is not actually built into a processor, but nevertheless functioning as such. Later, the scope of the term “virtual” has expanded to include anything that is created or carried by a computer and that mimics a “real” equivalent, like a virtual library and a virtual group meeting. 
The computer-based meaning of “virtual” conforms more with the traditional meaning of “virtual” as “practically but not formally real” than with “unreal”. Virtual memory, for example, is not unreal memory, but rather a simulation of physical memory that can effectively function as real memory. Under the above definition of “virtual” as “created or carried by a computer and mimicking a “real” equivalent,” virtual things and processes are imitations of real things, but this need not also preclude them from being real themselves. A virtual game of chess, for example, is also a real game of chess. It is just not played with a physically realized board and pieces. I have argued (Brey, 2003) that a distinction can be made between two types of virtual entities: simulations and ontological reproductions. Simulations are virtual versions of real-world entities that have a perceptual or functional similarity to them, but that do not have the pragmatic worth or effects of the corresponding real-world equivalent. Ontological reproductions are computer imitations of real-world entities that have (nearly) the same value or pragmatic effects as their real world counterparts. They hence have a real-world significance that extends beyond the domain of the virtual environment and that is roughly equal to that of their physical counterpart. To appreciate this contrast, consider the difference between a virtual chess game and a virtual beer. A virtual beer is necessarily a mere simulation of a real beer: it may look much like a real one, and may be lifted and consumed in a virtual sense, but it does not provide the taste and nourishment of a real beer and will never get one drunk. A virtual chess game, in contrast, may lack the physical sensation of moving real chess pieces on a board, but this sensation is considered peripheral to the game, and in relevant other respects, playing virtual chess is equivalent to playing chess with physical pieces. This is not to say that the distinction between simulations and ontological reproductions is unproblematic; it is ultimately a pragmatic distinction, and a virtual entity will be classified as one or the other depending on whether it is judged to share enough of the essential features of its physical counterpart. In Brey (2003), I argued that two classes of physical objects and processes can be ontologically reproduced on computers. A first class consists of physical entities that are defined in terms of visual, auditory or computational properties that can be fully realized on multimedia computers. Such entities include images, movies, musical pieces, stereo systems and calculators, which are all such that a powerful computer can successfully reproduce their essential physical or formal properties. A second class consists of what John Searle (1995) has called institutional entities, which are entities that are defined by a status or function that has been assigned to them within a social institution or practice. Examples of institutional entities are activities like buying, selling, voting, owning, chatting, playing chess, trespassing and joining a club, and requisite objects like contracts, money, letters and chess pieces. Most institutional entities are not dependent on a physical medium, because they are only dependent on the collective assignment of a status or function. For instance, we call certain pieces of paper money not because of their inherent physical nature but because we collectively assign monetary value to them. 
But we could also decide, and have decided, to assign the same status to certain sequences of bits that float around on the Internet. In general, if an institutional entity exists physically, it can also exist virtually. Therefore, many of our institutions and institutional practices, whether social, cultural, religious or economic, can exist in virtual or electronic form. It can be concluded that many virtual entities can be just as real as their physical counterparts. Virtuality and reality are therefore not each others opposites. Nevertheless, a large part of ordinary reality, that includes most physical objects and processes, cannot be ontologically reproduced in virtual form. In addition, institutional virtual entities can both possess and lack real-world implications. Sometimes virtual money can also be used as real money, whereas at other times, it is only a simulation of real money. People can also disagree on the status of virtual money, with some accepting it as legal tender, and others distrusting it. The ontological distinction between reality and virtuality is for these reasons confusing, and the ontological status of encountered virtual objects will often be not immediately clear. Is the Distinction Disappearing? Some authors have argued that the emergence of computer-generated realities is working to erase the distinction between simulation and reality, and therefore between truth and fiction. Jean Baudrillard (1995), for example, has claimed that information technology, media, and cybernetics have yielded a transition from an era of industrial production to an era of simulation, in which models, signs and codes mediate access to reality and define reality to the extent that it is no longer possible to make any sensible distinction between simulations and reality, so that the distinction between reality and simulation has effectively collapsed. Similarly, Albert Borgmann (1999) has argued that virtual reality and cyberspace have lead many people to confuse them for alternative realities that have the same actuality of the real world, thus leading to a collapse of the distinction between representation and reality, whereas according to him VR and cyberspace are merely forms of information and should be treated as such. Philip Zhai (1998), finally, has argued that there is no principled distinction between actual reality and virtual reality, and that with further technological improvements in VR, including the addition of functional teleoperation, virtual reality could be made totally equivalent to actual reality in its functionality for human life. Effectively, Zhai is arguing that any real-world entity can be ontologically reproduced in VR given the right technology, and that virtual environments are becoming ontologically more like real environments as technology progresses. Are these authors right that the distinction between virtuality and reality, and between simulation and reality, is disappearing? First, it is probably true that there is increasingly less difference between the virtual and the real. This is because, as has already been argued, many things are virtual and real at the same time. Moreover, the number of things that are both virtual and real seem to be increasing. This is because as the possibilities of computers and computer networks increase, more and more physical and institutional entities are reproduced in virtual form. 
There is a flight to the digital realm, in which many believe it is easier and more fun to buy and sell, listen to music or look at art, or do your banking. For many people, therefore, an increasingly large part of their real lives is also virtual, and an increasingly large part of the virtual is also real. Even if virtuality and reality are not opposite concepts, simulation and reality, and representation and reality, certainly are. Are these two distinctions disappearing as well? Suggesting that they at least become more problematic is the fact that more and more of our knowledge of the real world is mediated by representations and simulations, whether they are models in science, raw footage and enactments in broadcast news, and stories and figures in newspapers or on the Internet. Often, it is not possible, in practice or in principle, to verify the truth or accuracy of these representations through direct inspection of the corresponding state-of-affairs. Therefore, one might argue that these representations become reality for us, for they are all the reality we know. In addition, the distinction between recordings and simulations is becoming more difficult to make. Computer technology has made it easy to manipulate photos, video footage and sound recordings, and to generate realistic imagery, so that it is nowadays often unclear of photographic images or video footage on the Internet or in the mass media whether they are authentic or fabricated or enacted. The trend in mass media towards “edutainment” and the enactment and staging of news events has further problematized the distinction. Yet, all this does not prove that the distinction between simulation/representation and reality has collapsed. People do not get all of their information from media representations. They also move around and observe the world for themselves. People still question and critically investigate whether representations are authentic or correspond to reality. People hence still maintain an ontological distinction, even though it has become more difficult epistemologically to discern whether things and events are real or simulated. Zhai’s suggestion that the distinction could be completely erased through further perfection of virtual reality technology is unlikely to hold because it is unlikely that virtual reality could ever fully emulate actual reality in its functionality for human life. Virtual reality environments cannot, after all, sustain real biological processes, and therefore they can never substitute for the complete physical world. Evaluating the Virtual as a Substitute for the Real Next to the ontological and epistemological questions regarding distinction between the virtual and the real and how we can know this distinction, there is the normative question of how we should evaluate virtuality as a substitute for reality. First of all, are virtual things better or worse, more or less valuable, than their physical counterparts? Some authors have argued that they are in some ways better: they tend to be more beautiful, shiny and clean, and more controllable, predictable, and timeless. They attain, as Michael Heim (1983) has argued, a supervivid hyper-reality, like the ideal forms of platonism, more perfect and permanent than the everyday physical world, answering to our desire to transcend our mortal bodies and reach a state of permanence and perfection. Virtual reality, it may seem, can help us live lives that are more perfect, more stimulating and more in accordance with our fantasies and dreams. 
Critics of virtuality have argued that the shiny, polished objects of VR are mere surrogates: simplified and inferior substitutes for reality that lack authenticity. Albert Borgmann (1999), for example, has argued that virtuality is an inadequate a substitute for reality, because of its fundamental ambiguity and fragility, and lacks the engagement and splendor of reality. He also argues that virtuality threatens to alter our perspective on reality, causing us to see it as yet another sign or simulation. Hubert Dreyfus (2001) has argued that presence in VR and cyberspace gives a disembodied and therefore false experience of reality and that even immersive VR and telepresence present one with impoverished experiences. Another criticism of the virtual as a substitute for the real is that investments in virtual environments tend to correlate with disinvestments in people and activities in real life (Brey, 1998). Even if this were to be no loss to the person making the disinvestments, it may well be a loss to others affected by it. If a person takes great effort in caring for virtual characters, he or she may have less time left to give similar care and emotional attention to actual persons and animals, or may be less interested in giving it. In this way, investments in VR could lead to a neglect of real life and therefore a more solitary society. On the other hand, virtual environments can also be used to vent aggression, harming only virtual characters and property and possibly preventing similar actions in real life. Representation and Simulation: Ethical Issues VR and computer simulations are representational media: they represent real or fictional objects and events. They do so by means of different types of representations: pictorial images, sounds, words and symbols. In this section, ethical aspects of such representations will be investigated. It will be investigated whether representations are morally neutral and whether their manufacture and use in VR and computer simulations involves ethical choices. Misrepresentations, Biased Representations and Indecent Representations I will argue that representations in VR or computer simulations can become morally problematic for any of three reasons. First, they may cause harm by failing to uphold standards of accuracy. That is, they may misrepresent reality. Such representations will be called misrepresentations. Second, they may fail to uphold standards of fairness, thereby unfairly disadvantaging certain individuals or groups. Such representations will be called biased representations. Third, they may violate standards of decency and public morality. I will call such representations indecent representations. Misrepresentation in VR and computer simulation occurs when it is part of the aim of a simulation to realistically depict aspects of the real world, yet the simulation fails to accurately depict these features (Brey, 1999). Many simulations aim to faithfully depict existing structures, persons, state-of-affairs, processes or events. For example, VR applications have been developed that simulate in great detail the visual features of existing buildings such as the Louvre or Taj Mahal or the behavior of existing automobiles or airplanes. Other simulations do not aim to represent particular existing structures, but nevertheless aim to be realistic in their portrayal of people, things and events. 
For example, a VR simulation of military combat will often be intended to contain realistic portrayals of people, weaponry and landscapes without intending to represent particular individuals or a particular landscape. When simulations aim to be realistic, they are expected to live up to certain standards of accuracy. These are standards that define the degree of freedom that exist in the depiction of a phenomenon, and that specify what kinds of features must be included in a representation for it to be accurate, what level of detail is required, and what kinds of idealizations are permitted. Standards of accuracy are fixed in part by the aim of a simulation. For example, a simulation of surgery room procedures should be highly accurate if it is used for medical training, somewhat accurate when sold as edutainment, and need not be accurate at all when part of a casual game. Standard of accuracy can also be fixed by promises or claims made by manufacturers. For example, if a game promises that surgery room procedures in it are completely realistic, the standards of accuracy for the simulation of these procedures will be high. People may also disagree about the standards of accuracy that are appropriate for a particular simulation. For example, a VR simulation of military combat that does not represent killings in graphic detail may be discounted as inaccurate and misleading by anti-war activists, but may be judged to be sufficiently realistic for the military for training purposes. Misrepresentations of reality in VR and computer simulations are morally problematic to the extent that they can result in harm. The greater these harms are, and the greater the chance that they occur, the greater the moral responsibility of designers and manufacturers to ensure accuracy of representations. Obviously, inaccuracies in VR simulations of surgical procedures for medical training or computer simulations to test the bearing power of bridges can lead to grave consequences. A misrepresentation of the workings of an engine in educational software causes a lesser or less straightforward harm: it causes students to have false beliefs, some of which could cause harms at a later point in time. Biased representations constitute a second category of morally problematic representations in VR modeling and computer simulation (Brey, 1999). A biased representation is a representation that unfairly disadvantages certain individuals or groups or that unjustifiably promotes certain values or interests over others. A representation can be biased in the way it idealizes or selectively represents phenomena. For example, a simulation of global warming may be accurate overall but unjustifiably ignore the contribution to global warming made by certain types of industries or countries. Representations can also be biased by stereotyping people, things and events. For example, a computer game may contain racial or gender stereotypes in its depiction of people and their behaviors. Representations can moreover be biased by containing implicit assumptions about the user, as in a computer game that plays out male heterosexual fantasies, thereby assuming that players will generally be male and heterosexual. They can also be biased by representing affordances and interactive properties in objects that make them supportive of certain values and uses but not of others. For example, a gun in a game may be designed so that it can used to kill but not to knock someone unconscious. 
Indecent representations constitute a third and final category of morally problematic representations. Indecent representations are representations that are considered shocking or offensive or that are held to break established rules of good behavior or morality and that are somehow shocking to the senses or moral sensibilities. Decency standards vary widely across different individuals and cultures however, and what is shocking or immoral to some will not be so to others. Some will find any depiction of nudity, violence or physical deformities indecent, whereas others will find any such depiction acceptable. The depiction of particular acts, persons or objects may be considered blasphemous in certain religions but not outside these religions. For this reason, the notion of an indecent representation is a relative notion, and there will usually be disagreement about what representations count as indecent. In addition, the context in which representation take place may also influence whether it is considered decent. For example, the representation of open heart surgery, with some patients surviving the procedure but others dying on the operation table, may be inoffensive in the context of a medical simulator, but offensive in the context of a game that makes light of such a procedure. Virtual Child Pornography Pornographic images and movies are considered indecent by many, but there is a fairly large consensus that people have a right to produce pornography and use it in private. Such a consensus does not consist for certain extreme forms of pornography, including child pornography. Child pornography is considered wrong because it harms the children that are used to produce it. But what about virtual child pornography? Virtual child pornography is the digital creation of images or animated pictures that depict children engaging in sexual activities or that depict them in a sexual way. Nowadays, such images and movies can be made to be highly realistic. No real children are abused in this process, and therefore the major reason for outlawing child pornography does not apply for it. Does this mean that virtual child porn is morally permissible and that its production and consumption should be legal? The permissibility of virtual child porn has been defended on the argument that no actual harm is done to children and that that people have a right to free speech by which they should be permitted to produce and own virtual child pornography, even if others find such images offensive. Indeed, the U.S. Supreme Court struck down a congressional ban on virtual child porn in 2002 with the argument that this ban constituted too great a restriction on free speech. The court also claimed that no proof had been given of a connection between computer-generated child pornography and the exploitation of actual children. An additional argument that is sometimes used in favor of virtual child porn is that its availability to pedophiles may actually decrease the chances that they will harm children. Opponents of virtual child porn have sometimes responded with deontological arguments, claiming that it is degrading to children and undermines human dignity. Such arguments cut little ice, however, in a legal arena that is focused on individual rights and harms. Since virtual child porn does not seem to violate the individual rights, opponents have tried out various arguments to the effect that it does cause harm. 
One existing argument is that virtual child porn causes indirect harm to children because it encourages child abuse. This argument goes opposite the previously stated argument that virtual child porn should be condoned because it makes child abuse less likely. The problem is that it is very difficult to conduct studies that provide solid empirical evidence for either position. Another argument is that failing to criminalize virtual child porn will harm children because it makes it difficult to enforce laws that prohibit actual child pornography. This argument has been used often by law enforcers to criminalize virtual child porn. As Neil Levy (2002) has argued, this argument is however not plausible, amongst other reasons because experts are usually able to make the distinction between virtual and actual pictures. Levy’s own argument against virtual child porn is not that it will indirectly harm children, but that it may ultimately harm women by eroticizing inequality in sexual relationships. He admits, however, that he lacks the empirical evidence to back up this claim. Per Sandin (2004) has presented an argument with better empirical support, which is that virtual child porn should be outlawed because it causes significant harm to a great many people who are revulsed by it. The problem with this argument, however, is that it gives too much weight to harm caused by offense. If actions should be outlawed whenever they offend a large group of people, then individual rights would be drastically curtailed, and many things, ranging from homosexual behavior to interracial marriage, would still be illegal. It can be concluded that virtual child pornography will remain a morally controversial issue for some time to come, as no decisive arguments for or against it have been provided so far. Depiction of Real Persons Virtual environments and computer simulations increasingly include characters that are modeled after the likeness of real persons, whether living or deceased. Also, films and photographs increasingly include manipulated or computer-generated images of real persons who are placed in fictional scenes or are made to perform behaviors that they have not performed in real life. Such appropriations of likenesses are often made without the person’s consent. Is such consent morally required, or should the depictions of real persons be seen as an expression of artistic freedom or free speech? Against arguments for free speech, three legal and moral arguments have traditionally been given for restrictions on the use of someone’s likeness (Tabach-Bank, 2004). First, the right to privacy has been appealed to. It has been argued that the right to privacy includes a right to live a life free from unwarranted publicity (Prosser, 1960). The public use of someone’s likeness can violate someone’s privacy by intruding upon his seclusion or solitude or into his private affairs, by working to publicly disclose embarrassing private facts about him, or to place him in a false light in the public eye. A second argument for restricting the use of someone’s likeness is that it can be used for defamation. Depicting someone in a certain way, for example as being involved in immoral behavior or in a ridiculous situation, can defame him by harming his public reputation. In some countries, like the U.S., there is also a separate recognized right of publicity. The right to publicity is an individual's right to control and profit from the commercial use of his name, likeness and persona. 
The right to publicity has emerged as a protection of the commercial value of the identity of public personalities, or celebrities, who frequently use their identity to sell or endorse products or services. It is often agreed that celebrities have less of an expectation of privacy, because they are public personalities, but have a greater expectation of a right to publicity. In the use of the likenesses of real persons in virtual environments or doctored digital images, rights to free speech, freedom of the press and freedom of artistic expression will therefore have to be balanced against the right to privacy, the right of publicity and the right to protection from defamation. Behavior in Virtual Environments: Ethical Issues The preceding section focused on ethical issues in design and embedded values in VR and computer simulations. This section focuses on ethical issues in the use of VR and interactive computer simulations. Specifically, the focus will be on the question whether actions within the worlds generated by these technologies can be unethical. This issue will be analyzed for both single-user and multi-user systems. Before it will be taken up, it will first be considered how actions in virtual environments take place, and what the relation is between users and the characters as which they appear in virtual environments. Avatars, Agency and Identity In virtual environments, users assume control over a graphically realized character called an avatar. Avatars can be built after the likeness of the user, but more often, they are generic persons or fantasy characters. Avatars can be controlled from a first-person perspective, in which the user sees the world through the avatar’s eyes, or from a thirdperson perspective. In multi-user virtual environments, there will be multiple avatars corresponding to different users. Virtual environments also frequently contain bots, which are programmed or scripted characters that behave autonomously and are controlled by no one. The identity that users assume in a virtual environment is a combination of the features of the avatar they choose, the behaviors that they choose to display with it, and the way others respond to the avatar and its behaviors. Avatars can function as a manifestation of the user, who behaves and acts like himself, and to whom others respond as if it is the user himself, or as a character that has no direct relation to the user and that merely plays out a role. The actions performed by avatars can therefore range from authentic expressions of the personality and identity of the user to experimentation with identities that are the opposite of who the user normally is. Whether or not the actions of an avatar correspond with how a user would respond in real life, there is no question that the user is causally and morally responsible for actions performed by his or her avatar. This is because users normally have full control over the behavior of their avatars through one or more input devices. There are occasional exceptions to this rule, because avatars are sometimes taken over by the computer and then behave as bots. The responsibility for the behavior of bots could be assigned to either their programmer or to whomever introduced them into a particular environment, or even to the programmer of the environment for not disallowing harmful actions by bots (Ford, 2001). 
Behavior in Single-User VR Single-user VR offers much less possibilities for unethical behavior than multi-user VR, because there are no other human beings that could be directly affected by the behavior of a user. The question is if there are any behaviors in single-user VR that could qualify as unethical. In Brey (1999), I considered the possibility that certain actions that are unethical when performed in real life could also be unethical when performed in singleuser VR. My focus was particularly on violent and degrading behavior towards virtual human characters, such as murder, torture and rape. I considered two arguments for this position, the argument from moral development and the argument from psychological harm. According to the argument from moral development, it is wrong to treat virtual humans cruelly because doing so will make it more likely that we will treat real humans cruelly. The reason for this is that the emotions appealed to in the treatment of virtual humans are the same emotions that are appealed to in the treatment of real humans, because these actions resemble each other so closely. This argument has recently gained empirical support (Slater et al., 2006). The argument from psychological harm is that third parties may be harmed by the knowledge or observation that people engage in violent, degrading or offensive behavior in single-user VR and that therefore this behavior is immoral. This argument is similar to the argument attributed to Sandin in my earlier discussion of indecent representations. I claimed in Brey (1999) that although harm may be caused by particular actions in single-user VR because people may be offended by them, it does not necessarily follow that the actions are immoral, but only that they cause indirect harm to some people. One would have to balance such harms against any benefits, such as pleasurable experiences to the user. Matt McCormick (2001) has offered yet another argument according to which violent and degrading behavior in single-user VR can be construed as unethical. He argues that repeated engagement in such behavior erodes one’s character and reinforces virtueless habits. He follows Aristotelian virtue ethics in arguing that this is bad because it makes it difficult for us to lead fulfilling lives, because as Aristotle has argued, a fulfilling life can only be lived by those who are of virtuous character. More generally, the argument can be made that the excessive use of single-user VR keeps one from leading a good life, even if one’s actions in it are virtuous, because one invests into fictional worlds and fictional experiences that seem to fulfill one’s desires but do not actually do so (Brey, forthcoming). Behavior in Multi-User VR Many unethical behaviors between persons in the real world can also occur in multi-user virtual environments. As discussed earlier in the section on reality and virtuality, there are two classes of real-world phenomena that can also exist in virtual form: institutional entities that derive their status from collective agreements, like money, marriage, and conversations, and certain physical and formal entities, like images and musical pieces, that computers are capable of physically realizing. Consequently, unethical behaviors involving such entities can also occur in VR, and it is possible for there to be real thefts, insults, deceptions, invasions of privacy, breaches of contract, or damage to property in virtual environments. 
Immoral behaviors that cannot really happen in virtual environments are those that are necessarily defined over physically realized entities. For example, there can be real insult in virtual environments, but not real murders, because real murders are defined over persons in the physical world, and the medium of VR does not equip users with the power to kill persons in the physical world. It may, of course, be possible to kill avatars in VR, but these are of course not killings of real persons. It may also be possible to plan a real murder in VR, for example by using VR to meet up with a hitman, but this cannot then be followed up by the execution of a real murder in VR. Even though virtual environments can be the site of real events with real consequences, they are often recognized as fictional worlds in which characters merely play out roles. In such cases, even an insult may not be a real insult, in the sense of an insult made by a real person to another real person, because it may only have the status of an insult between two virtual characters. The insult is then only real in the context of the virtual world, but is not real in the real world. Ambiguities arise, however, because it will not always be clear when actions and events in virtual environments should be seen as fictional or real (Turkle, 1995). Users may assign different statuses to objects and events, and some users may identify closely with their avatar, so that anything that happens to their avatar also happens to them, whereas others may see their avatar as an object detached from themselves with which they do not identify closely. For this reason, some users may feel insulted when their avatar is insulted, whereas others will not feel insulted at all. This ambiguity in the status of many actions and events in virtual worlds can lead to moral confusion as to when an act that takes place in VR is genuinely unethical and when it merely resembles a certain unethical act. The most famous case of this is the case of the “rape in cyberspace” reported by Julian Dibbell (1993). Dibbell reported an instance of a “cyberrape” in LambdaMOO, a text-only virtual environment in which users interact with user-programmable avatars. One user used a subprogram that took control of avatars and made them perform sex acts on each other. Users felt their characters were raped, and some felt that they themselves were indirectly raped or violated as well. But is it ever possible for someone to be raped through a rape of her avatar, or does rape require a direct violation of someone’s body? Similar ambiguities exist for many other immoral practices in virtual environments, like adultery and theft. If it would constitute adultery for two persons to have sex with each other, does it also constitute adultery when their avatars have sex? When a user steals virtual money or property from other users, should he be considered a thief in real life?

Virtual Property and Virtual Economies

For any object or structure found in a virtual world, one may ask the question: who owns it? This question is already ambiguous, however, because there may be both virtual and real-life owners of virtual entities. For example, a user may be the owner of an island in a virtual world, but the whole world, including the island, may be owned by the company that has created it and permits users to act out roles in it. Users may also become creators of virtual objects, structures and scripted events, and some put hundreds of hours of work into their creations.
May they therefore also assert intellectual property rights to their creations? Or can the company that owns the world in which the objects are found and the software with which they were created assert ownership? What kind of framework of rights and duties should be applied to virtual property (Burk, 2005)? The question of property rights in virtual worlds is further complicated by the emergence of so-called virtual economies. Virtual economies are economies that exist within the context of a persistent multi-user virtual world. Such economies have emerged in virtual worlds like Second Life and The Sims Online, and in massively multiplayer online role-playing games (MMORPGs) like Entropia Universe, World of Warcraft, Everquest and EVE Online. Many of these worlds have millions of users. Economies can emerge in virtual worlds if there are scarce goods and services in them for which users are willing to spend time, effort or money, if users can also develop specialized skills to produce such goods and services, if users are able to assert property rights on goods and resources, and if they can transfer goods and services between them. Some economies in these worlds are primitive barter economies, whereas others make use of recognized currencies. Second Life, for example, makes use of the Linden Dollar (L$) and Entropia Universe has the Project Entropia Dollar (PED), both of which have an exchange rate against real U.S. dollars. Users of these worlds can hence choose to acquire such virtual money by doing work in the virtual world (e.g., by selling services or opening a virtual shop) or by making money in the real world and exchanging it for virtual money. Virtual objects are now frequently traded for real money outside the virtual worlds that contain them, on online trading and auction sites like eBay. Some worlds also allow for the trade of land. In December 2006, the average price of a square meter of land in Second Life was L$ 9.68 or U.S. $ 0.014 (up from L$ 6.67 in November), and over 36,000,000 square meters were sold.1 Users have been known to pay thousands of dollars for cherished virtual objects, and over $ 100,000 for real estate. The emergence of virtual economies in virtual environments raises the stakes for their users, and increases the likelihood that moral controversies ensue. People will naturally be more likely to act immorally if money is to be made or if valuable property is to be had. In one incident, which took place in China, a man lent a precious sword to another man in the online game Legend of Mir 3, who then sold it to a third party. When the lender found out about this, he visited the borrower at his home and killed him.2 Cases have also been reported of Chinese sweatshop laborers who work day and night in conditions of practical slavery to collect resources in games like World of Warcraft and Lineage, which are then sold for real money.

1 Source: https://secondlife.com/whatis/economy_stats.php. Accessed 1/3/2007.
2 Online gamer killed for selling cyber sword. ABC News Online, March 30, 2005. http://www.abc.net.au/news/newsitems/200503/s1334618.htm.

There have also been reported cases of virtual prostitution, for instance on Second Life, where users are paid to (use their avatar to) perform sex acts or to serve as escorts. There have also been controversies over property rights. On Second Life, for example, controversy ensued when someone introduced a program called CopyBot that could copy any item in the world.
This program wreaked havoc on the economy, undermining the livelihood of thousands of business owners in Second Life, and was eventually banned after mass protests.3 Clearly, then, the emergence of virtual economies and serious investments in virtual property generates many new ethical issues in virtual worlds. The more time, money and social capital people invest in virtual worlds, the more such ethical issues will come to the fore.

The Ethics of Computer Games

Contemporary computer and video games often play out in virtual environments or include computer simulations, as defined earlier. Computer games are nowadays mass media. A recent study shows that the average American 8- to 18-year-old spends almost six hours per week playing computer games, and that 83% have access to a video game console at home (Rideout, Roberts and Foehr, 2005). Adults are also players, with four in ten playing computer games on a regular basis.4 In 2005, the revenue generated in the U.S. by the computer and video game industry was over U.S. $ 7 billion, far surpassing the film industry’s annual box office results.5 Computer games have had a vast impact on youth culture, but also significantly influence the lives of adults. For these reasons alone, an evaluation of their social and ethical aspects is needed. Some important issues bearing on the ethics of computer games have already been discussed in previous sections, and therefore will be covered less extensively here. These include, amongst others, ethical issues regarding biased and indecent representations; issues of responsibility and identity in the relation between avatars, users and bots; the ethics of behavior in virtual environments; and moral issues regarding virtual property and virtual economies. These issues, and the conclusions reached regarding them, all fully apply to computer games. The focus in this section will be on three important ethical questions that apply to computer games specifically: Do computer games contribute to individual well-being and the social good? What values should govern the design and use of computer games? And do computer games contribute to gender inequality?

The Goods and Ills of Computer Games

Are computer games a benefit to society? Many parents do not think so. They worry about the extraordinary amount of time their children spend playing computer games, and about the excessive violence that takes place in many games. They worry about negative effects on family life, schoolwork and the social and moral development of their kids. In the media, there has been much negative reporting about computer games. There have been stories about computer game addiction and about players dying from exhaustion and starvation after playing video games for days on end. There have been stories about ultraviolent and otherwise controversial video games, and the ease with which children can gain access to them.

3 Linden bans CopyBot following resident protests. Reuters News, Wednesday November 15, 2006. http://secondlife.reuters.com/stories/2006/11/15/linden-bans-copybot-following-resident-protests/
4 Poll: 4 in 10 adults play electronic games. MSNBC.com, May 8, 2006. http://www.msnbc.msn.com/id/12686020/
5 2006 Essential Facts about the Computer and Video Game Industry, Entertainment Software Association, 2006. http://www.theesa.com/archives/files/Essential%20Facts%202006.pdf
The Columbine High School massacre, in 1999, in which two teenage students went on a shooting rampage, was reported in the media to have been inspired by the video game Doom, and since then, other mass shootings have also been claimed to have been inspired by video games. Considerable doubt has been raised, therefore, as to whether computer games are indeed a benefit to society rather than a social ill. The case against computer games tends to center on three perceived negative consequences: addiction, aggression and maladjustment. The perceived problem of addiction is that many gamers get so caught up in playing that their health, work or study, family life, and social relations suffer. How large this problem really is has not yet been adequately documented (though see Chiu, Lee and Huang, 2004). There clearly is a widespread problem, as there has been a worldwide emergence of clinics for video game addicts in recent years. Not all hard-core gamers will be genuine addicts in the psychiatric sense, but many do engage in overconsumption, resulting in the neglect described above. The partners of adults who engage in such overconsumption are sometimes called gamer widows, analogous to soccer widows, denoting that they have a relationship with a gamer who pays more attention to the game than to them. Whereas there is no doubt that addiction to video games is a real social phenomenon, there is less certainty that playing video games can be correlated with increased aggression, as some have claimed. The preponderance of the evidence seems to indicate, however, that the playing of violent video games can be correlated with increases in aggression, including increases in aggressive thoughts, aggressive feelings and aggressive behaviors, a desensitization to real-life violence, and a decrease in helpful behaviors (Carnagey, Anderson and Bushman, forthcoming; Bartholow, 2005). However, some studies have found no such correlations, and present findings remain controversial. Whatever the precise relation between violent video games and aggression turns out to be, it is clear now that there is a huge difference between the way that children are taught to behave towards others by their parents and how they learn to behave in violent video games. This at least raises the question of how their understanding of and attitude towards violence and aggression is influenced by violent video games. A third hypothesized ill of video games is that they cause individuals to be socially and cognitively stunted and maladjusted. This maladjustment is attributed in part to the neglect of studies and social relations due to an overindulgence in video games and to increased aggression levels from playing violent games. But it is also held to be due to the specific skills and understandings that users gain from video games. Children that play video games are exposed to conceptions of human relations and the workings of the world that have been designed into them by game developers. These conceptions have not been designed to be realistic or pedagogical, and often rely on stereotypes and simplistic modes of interaction and solutions to problems. It is therefore conceivable that children develop ideas and behavioral routines while playing computer games that leave much to be desired. The case in favor of computer games begins with the observation that they are a new and powerful medium that brings users pleasure and excitement, and that allows for new forms of creative expression and new ways of acting out fantasies.
Games, moreover, do not just cause social isolation; they can also stimulate social interaction. Playing multiplayer games is a social activity that involves interactions with other players, and that can even help solitary individuals find new friends. Computer games may moreover induce social learning and train social skills. This is especially true for role-playing games and games that involve verbal interactions with other characters. Such games let players experiment with social behavior in different social settings, and role-playing games can also make users intimately familiar with the point of view and experiences of persons other than themselves. Computer games have moreover been claimed to improve perceptual, cognitive and motor skills, for example by improving hand-eye coordination and improving visual recognition skills (Johnson, 2005; Green and Bavelier, 2003).

Computer Games and Values

It has long been argued in computer ethics that computer systems and software are not value-neutral but are instead value-laden (Nissenbaum, 1998; Brey, 2000). Computer games are no exception. Computer games may suggest, stimulate, promote or reward certain values while shunning or discouraging others. Computer games are value-laden, first of all, in the way they represent the world. As discussed earlier, such representations may contain a variety of biases. They may, for example, promote racial and gender stereotypes (Chan, 2005; Ray, 2003), and they may contain implicit, biased assumptions about the abilities, interests or gender of the player. Simulation games like SimCity may suggest all kinds of unproven causal relations, for example correlations between poverty and crime, that may help shape attitudes and feed prejudices. Computer games may also be value-laden in the interactions that they make possible. They may, for example, be designed to make violent action the only solution to problems faced by a player. Computer games can also be value-laden in the storylines they suggest for players and in the feedback and rewards that are given. Some first-person shooters award extra points, for example, for not killing innocent bystanders, whereas others instead award extra points for killing as many as possible. A popular game like The Sims can serve to illustrate how values are embedded in games. The Sims is a game that simulates the everyday lives and social relationships of ordinary persons. The goal of characters in the game is happiness, which is attained through the satisfaction of needs like Hunger, Comfort, Hygiene and Fun. These needs can be satisfied through success in one’s career, and through consumption and social interaction. As Miguel Sicart (2003) has argued, The Sims thus presents an idealized version of a progressive liberal consumer society in which the goal in life is happiness, gained by being a good worker and consumer. The team-based first-person shooter America’s Army presents another example. This game is offered as a free download by the U.S. government, which uses it to stimulate U.S. army recruitment. The game is designed to give a positive impression of the U.S. army. Players play as servicemen who obey orders and work together to combat terrorists. The game claims to be highly realistic, yet it has been criticized for not showing certain realistic aspects of military life, such as collateral damage, harassment, and gore. The question is how influential computer games actually are in shaping the values of players.
The amount of psychological research done on this topic is still limited. However, psychological research on the effects of other media, such as television, has shown that such media can be very influential in affecting the values of media users, especially children. Since many children are avid consumers of computer games, there are reasons to be concerned about the values projected on them by such games. Children are still involved in a process of social, moral and cognitive development, and computer games seem to have an increasingly large role in this developmental process. Concern about the values embedded in video games therefore seems warranted. On the other hand, computer games are games, and therefore should allow for experimentation, fantasy, and going beyond socially accepted boundaries. The question is how games can support such social and moral freedom without also supporting the development of skewed values in younger players. Players do not just develop values on the basis of the structure of the game itself, they also develop them by interacting with other players. Players communicate messages to each other about game rules and acceptable in-game behavior. They can respond positively or negatively to certain behaviors, and may praise or berate other players. In this way, social interactions in games may become part of the socialization of individuals and influence their values and social beliefs. Some of these values and norms may remain limited to the game itself, for example, norms governing the permissibility of cheating (Kimppa and Bissett, 2005). In some games, however, like massively multiplayer online role-playing games (MMORPGs), socialization processes are so complex as to resemble real life (Warner and Raiter, 2005), and values learned in such games may be applied to real life as well.

Computer Games and Gender

Games magazines and game advertisements foster the impression that computer games are a medium for boys and men. Most pictured gamers are male, and many recurring elements in images, such as scantily clad, big-breasted women, big guns and fast cars, seem to be geared toward men. The impression that computer games are mainly a medium for men is further supported by usage statistics. Research has consistently shown that fewer girls and women play computer games than boys and men, and those that do spend less time playing than men. According to research performed by Electronic Arts, a game developer, among teenagers only 40% of girls play computer games, compared to 90% of boys. Moreover, when they reach high school, most girls lose interest, whereas most boys keep playing.6 A study by the UK games trade body, the Entertainment and Leisure Software Publishers Association, found that in Europe, women gamers make up only a quarter of the gaming population.7

6 Games industry is 'failing women'. BBC News, August 21, 2006. http://news.bbc.co.uk/2/hi/technology/5271852.stm
7 Chicks and Joysticks. An Exploration of Women and Gaming. ELSPA White Paper, September 2004. www.elspa.com/assets/files/c/chicksandjoysticksanexplorationofwomenandgaming_176.pdf

The question whether there is a gender bias in computer games is morally significant because it is a question about gender equality. If it is the case that computer games tend to be designed and marketed for men, then women are at an unfair disadvantage, as they consequently have less opportunity to enjoy computer games and their possible benefits.
Among such benefits may be greater computer literacy, an important quality in today’s marketplace. But is the gender gap in the usage of computer games really the result of gender bias in the gaming industry, or could it be the case that women are simply less interested in computer games than men, regardless of how games are designed and marketed? Most analysts hold that the gaming industry is largely to blame. They point to the fact that almost all game developers are male, and that there have been few efforts to develop games suitable for women. To appeal to women, it has been suggested, computer games should be less aggressive, because women have been socialized to be non-aggressive (Norris, 2004). It has also been suggested that women have a greater interest in multiplayer games, games with complex characters, games that contain puzzles, and games that are about human relationships. Games should also avoid assumptions that the player is male and avoid stereotypical representations of women. Few existing games contain good role models for women. Studies have found that most female characters in games have unrealistic body images and display stereotypical female behaviors, and that a disproportionate number of them are prostitutes and strippers.8

Virtual Reality, Simulation and Professional Ethics

In discussing issues of professional responsibility in relation to virtual reality systems and computer simulations, a distinction can be made between the responsibility of developers of such systems and that of professional users. Professional users can be claimed to have a responsibility to acquaint themselves with the technology and its potential consequences and to use it in a way that is consistent with the ethics of their profession. The responsibility of developers includes giving consideration to ethical aspects in the design process and engaging in adequate communication about the technology and its effects to potential users. In the development of computer simulations, the accuracy of the simulation and its reliability as a foundation for decision-making in the real world are of paramount importance. The major responsibility of simulation professionals is therefore to avoid misrepresentations where they can and to adequately communicate the limitations of simulations to users (McLeod, 1983). These responsibilities are, indeed, a central ingredient in a recent code of ethics adopted by a large number of professional organizations for simulationists (Ören et al., 2002). The responsibility for accuracy entails the responsibility to take proper precautions to ensure that modeling mistakes do not occur, especially when the stakes are high, and to inform users if inaccuracies do or may occur. It also entails the responsibility not to participate in intentional deception of users (e.g., through embellishment, dramatization, or censorship).

8 Fair Play: Violence, Gender and Race in Video Games. Children Now, December 2001. 36 pp. http://publications.childrennow.org/

In Brey (1999), I have argued that designers of simulations and virtual environments also have a responsibility to incorporate proper values into their creations. It has been argued earlier that representations and interfaces are not value-free but may contain values and biases. Designers have a responsibility to reflect on the values and biases contained in their creations and to ensure that they do not violate important ethical principles.
The responsibility to do this follows from the ethical codes that are in use in different branches of engineering and computer science, especially the principle that professional expertise should be used for the enhancement of human welfare. If technology is to promote human welfare, it should not contain biases and should take into account the values and interests of stakeholders or society at large. Taking into account such values and avoiding biases in design cannot be done without a proper methodology. Fortunately, a detailed proposal for such a methodology has recently been made by Batya Friedman and her associates, and has been termed value-sensitive design (Friedman, Kahn and Borning, 2006). Special responsibilities apply to different areas of application for VR and computer simulations. The use of virtual reality in therapy and psychotherapy, for example, requires special consideration of principles of informed consent and the ethics of experimentation with human subjects (Wiederhold and Wiederhold, 2004). The computer and video game industry can be argued to have a special responsibility to consider the social and cultural impact of its products, given that they are used by a mass audience that includes children. Arguably, game developers should consider the messages that their products send to users, especially children, and should work to ensure that they develop and market content that is age-appropriate and that is more inclusive of all genders. Virtual reality and computer simulation will continue to present new challenges for ethics, because new and more advanced applications are still being developed, and their use is increasingly widespread. Moreover, as has been argued, virtual environments can mimic many of the properties of real life, and therefore contain many of the ethical dilemmas found in real life. It is for this reason that they will not just continue to present new ethical challenges for professional developers and users, but also for society at large.


Improvement of light quality by ZrO2

Abstract: A novel combination of blue LED chips, transparent glass substrates and phosphors with PDMS thin film is demonstrated. The flip-chip bonding technology is applied to facilitate this design. The ZrO2 nanoparticles are also doped into the PDMS film to increase light scattering. The resultant luminous efficiency shows an 11% enhancement when compared to the regular COG device. The variation of correlated color temperature of such devices is also reduced to 132K. In addition to these changes, the surface temperature is reduced from 121°C to 104°C due to good thermal dissipation brought by ZrO2 nanoparticles.

  

1. Introduction

As energy resources are dwindling these days, it is important to develop eco-friendly technologies that sustain people’s quality of life. White light-emitting diodes (LEDs) have been one of the green technologies developed to replace conventional incandescent light bulbs [1–3]. In the past, vertical-contact types of LEDs were the dominant design. However, the poor thermal conductivity and insulating substrate pose difficulties in application [4]. To deal with this issue, flip-chip technology has become popular in recent years because it can provide high light-extraction efficiency and good heat dissipation [5–7]. In the flip-chip scheme, an increase of output power can be observed due to the reflector at the bottom and the direct bonding of the contact pads, which can reduce the shadowing effect [8, 9]. Further investigation of the thermal resistance and junction temperature of these flip-chip bonded devices revealed that the direct metal contact and the thinner sapphire substrate can really help heat dissipation [10].
In order to generate high-efficiency white LEDs, there have been many approaches to optimizing the performance of the w-LED. One of the important techniques is the application of nanoscale structures that can increase the light extraction. Delicate designs, such as sapphire substrates with high-aspect-ratio cone shapes, and nano-patterned air voids between the GaN nano-pillars or the overgrown GaN layer [11], were fabricated on patterned sapphire substrates to improve the light extraction. Conventionally, LED chips are bonded on an opaque substrate to fabricate the SMD (surface-mount device) type LED package. This type of package can induce a certain amount of photon re-absorption, so that a degradation of efficiency can be expected. In order to promote the luminous efficiency, the chip-on-glass structure is a good candidate [12, 13]. The highly transparent glass substrate opens up another chance to collect the back-scattered photons and promote the luminous efficiency of the LED. Many results with a similar idea have been reported previously. Chien et al. designed a flip glass substrate package to enhance the color bin yield, and Chang et al. promoted the technology of flip-chip bonding on ITO [14, 15]. However, using glass as the substrate causes the device to operate at a higher temperature due to the poor thermal conductivity (σ ~ 1 W/mK) and the low emissivity (about 0.8~0.85) of the glass [16–18]. There have been many approaches to resolving this heat dissipation problem, such as nanoparticle deployment. In the past, graphene nanoparticles were embedded in the LED package to provide good thermal conduction and extra environmental protection [19]. Su et al. also investigated the usage of a TiO2-doped silicone layer in the LED package [20]. When it was combined with a silicone lens, both light extraction and junction temperature could be improved.

This study generated a high-luminous-efficiency w-LED by the COG (chip-on-glass) packaging process. One relatively new material, zirconia (ZrO2) nanoparticles, provides a superior photon scattering ability [21–24]. At the same time, the use of zirconia particles promotes heat radiation and reduces the device temperature owing to their high emissivity (0.95) [25–27]. In this study, ZrO2 nanoparticles are used with a PDMS film to optimize the performance of a w-LED flip-chip bonded onto a glass substrate.

2. Experimental methods

This study demonstrates two kinds of host structures that use zirconia particles to yield a white-light chip on glass (COG). One is to mix ZrO2 directly with the phosphor/PDMS slurry, and the other is to mix the ZrO2 with silicone to form a diffusing film and then cover the regular phosphor + LED structure with this film. Figure 1(a) shows the schematic and an actual structure photograph of the ZrO2-embedded phosphor PDMS film combined with the COG structure to yield the w-LED. The ZrO2 particles are blended with the silicone to form a transparent PDMS film and then covered on the COG structure, as shown in Fig. 1(b). The YAG phosphor film is dispensed on the COG structure and pumped by the blue chips to generate the white light, as shown in Fig. 1(c). The fabrication procedure is as follows. First, prepare the LED chips with a wavelength of 450 nm, flip-chip bond them on the sapphire substrate, and cover them with the 10 wt% YAG phosphor film as the reference sample. Second, mix the zirconia nanoparticles into the silicone slurry and into the 10 wt% YAG phosphor silicone slurry. Then spin-coat the mixtures on glass and dry them in order to form the zirconia films. Finally, peel the zirconia/phosphor film from the glass and put the film on the COG structure, with ZrO2 concentrations of 1 wt%, 5 wt% and 9 wt% forming Sample1, Sample2, and Sample3, respectively. On the other hand, Sample4, Sample5, and Sample6 are covered by the transparent 1 wt%, 5 wt% and 9 wt% ZrO2 film on the w-LED device (10 wt% phosphor film covered on the COG structure), respectively. Table 1 shows the PDMS films with the different doping concentrations of ZrO2.

Fig. 1. (a) The cross section of the w-LED combining the ZrO2-embedded phosphor PDMS film with the COG structure, (b) the transparent ZrO2 PDMS film covered on the COG structure, and (c) the YAG phosphor film dispensed on the COG structure.

Table 1. The PDMS films with the different doping concentrations of ZrO2

                  Ref.  Sample1  Sample2  Sample3  Sample4  Sample5  Sample6
ZrO2 (wt%)          0      1        5        9        1        5        9
Phosphor (wt%)     10     10       10       10        0        0        0

The finished ZrO2 layers and their mixtures with phosphors were inspected by SEM. Figures 2(a) and 2(b) show the SEM cross-section images of the ZrO2 PDMS film and the YAG phosphor/ZrO2 film. In these images, the thickness of the transparent ZrO2 PDMS film is about 260.2 µm and that of the YAG phosphor/ZrO2 film is about 290 µm. Figure 2(c) shows that the ZrO2 nanoparticles in the PDMS film used in this experiment have an average particle size of about 300 nm.

Fig. 2. The cross-section view of (a) the ZrO2 film, (b) the phosphor/ZrO2 composite film, and (c) the ZrO2 nanoparticles in the silicone.

3. Results and discussion

Figure 3(a) shows the spectra for the reference sample and the other samples with different concentrations of ZrO2 and phosphors at a driving current of 80 mA. Furthermore, the CCT and the luminous efficiency are shown in Table 2. In general, the samples with both zirconia particles and phosphors perform better than the reference. If only zirconia particles are present, a proper concentration is needed to enhance the luminous efficiency. These phenomena are caused by the high scattering ability of the ZrO2 nanoparticles, and the improvement results from the better utilization of the blue photons from the LEDs [24]. To obtain the highest lumen efficiency, the best ratio for the ZrO2-blended phosphor film and the pure ZrO2 layer is about 1 wt% and 5 wt%, respectively. However, increasing the ratio of zirconia nanoparticles above 5 wt% can cause lower luminous efficiency, because a high concentration of ZrO2 leads to extra light trapping and re-absorption of the emitted photons and reduces the transmittance of the layer [24, 25]. Figure 3(b) shows the COG-structure w-LED samples with and without the ZrO2 layer after 10 minutes of driving at 80 mA. The lumen efficiency of all the COG w-LED samples decays after driving the devices for 10 minutes. This phenomenon is caused by the poor thermal conductivity and the low emissivity of the glass substrate: after 10 minutes of electrical driving, the heat dissipation is poor and the package temperature rises accordingly. The lumen efficiency and the CCT of the reference sample and Sample1 to Sample6 after 10 minutes of driving are listed in Table 3. As shown in Table 3, the 5 wt% ZrO2-blended phosphor layer shows the best performance as time passes. In addition to the change in spectral intensity, the efficiency (lumen/watt) of the w-LEDs changes after 10 minutes of continuous driving at 80 mA. From Fig. 3(c), all samples show some degradation in efficiency, but the reference ones (without any ZrO2) degrade the most, and the amount of degradation drops as the weight percent of ZrO2 increases. We believe the high emissivity of the ZrO2 particles (0.95) plays an important role in this phenomenon because they can effectively radiate heat and reduce the internal temperature of the package.

Table 2. The luminous efficiency and CCT for the reference sample and the zirconia-combined samples

Ref. Sample1 Sample2 Sample3 Sample4 Sample5 Sample6
CCT(K) 4960.1 4736.6 4912 4737 4883 4911.3 4956
Lumen/watt 176.7 196.8 194.4 189.8 188.9 196.1 174.1
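
The roughly 11% efficiency enhancement quoted in the abstract can be reproduced directly from these numbers. A one-line check in Python (variable names are illustrative) might look like this:

```python
# Quick check of the luminous-efficiency enhancement quoted in the abstract,
# using the lumen/watt values from Table 2.

reference_lm_per_w = 176.7   # reference COG w-LED (no ZrO2)
sample1_lm_per_w = 196.8     # 1 wt% ZrO2 blended phosphor film (Sample1)

enhancement = (sample1_lm_per_w - reference_lm_per_w) / reference_lm_per_w
print(f"Enhancement: {enhancement:.1%}")   # about 11.4%
```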

Fig. 3. (a) The emission spectra for the reference sample, the ZrO2-embedded phosphor film covered samples, and the reference sample covered by the ZrO2 film at 80 mA, and (b) the emission spectra after 10 minutes of driving. (c) The deviation of the lumen efficiency between the original samples and the samples after 10 minutes of driving.

Table 3. The luminous efficiency and CCT for the reference sample and the zirconia-combined samples after 10 minutes of driving

Ref. Sample1 Sample2 Sample3 Sample4 Sample5 Sample6
CCT(K) 6364.5 5010.1 4859 4807.7 5421.8 5042 6275.1
Lumen/watt 130.4 170.2 181.8 178.3 153.1 165 163

To verify this conjecture, we need to check the actual temperatures of these units. Figures 4(a)-4(d) show the surface temperature mapping of the w-LEDs with ZrO2 nanoparticles in operation. In these thermal images, the white area represents the maximum surface temperature, which is plotted in Figs. 4(e) and 4(f). The surface temperatures shown in the figures were measured after 10 minutes of continuous electrical injection; with different configurations, the devices demonstrated different temperatures. The temperature differences between the two groups (Sample 1-3 vs. Sample 4-6) arise from the different layouts in the COG package. In Samples 1-3, the ZrO2 is embedded in the phosphor film itself. For Samples 4-6, however, there is one additional ZrO2/silicone layer, because the phosphor and ZrO2 layers are separated. This additional layer can increase the total temperature by about 5°C due to poorer heat dissipation in the device. As these results show, a ZrO2 nanoparticle-embedded layer can reduce the surface temperature of the w-LED and effectively alleviate the poor heat dissipation of the COG structure.

To understand the possible explanation of the measured surface temperature results, the correlation between the packaging material and heat transfer needs to be clarified. It is true that a higher concentration of ZrO2 can lead to more photon trapping, and thus potentially more re-absorption and a higher temperature rise. In a general LED scheme, about 20% of the input power turns into optical energy and 80% becomes heat [28]. Of these photons, about 30% become stray photons and may be re-absorbed in the package. So if we consider this trapped or re-absorbed light as a source of heat in terms of total input power, at most 6% of extra heat (20% × 30%) can be generated due to this re-absorption. Compared to the heat generated directly by the LED chip, this is a far smaller portion. On the other hand, the high emissivity of the ZrO2 nanoparticles helps the film shed this extra heat. From the general theory of heat transport discussed below (Eqs. (1)-(4)), the radiation and convection parts are significant when the temperature difference between the film and the environment is high. Although the number of trapped photons is high, the presence of ZrO2 can radiate more heat out and thus lower the overall temperature of the package.
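
A quick numeric sketch of this heat budget is given below in Python, using only the percentages quoted above; the 100 W input power is an arbitrary illustrative figure, not a value from the experiment.

```python
# Rough heat-budget estimate based on the percentages quoted in the text.
# The 100 W input power is an arbitrary illustrative number.

input_power_w = 100.0

optical_fraction = 0.20    # ~20% of input power becomes light [28]
stray_fraction = 0.30      # ~30% of emitted photons become stray/trapped

direct_heat_w = input_power_w * (1 - optical_fraction)                 # 80 W
reabsorbed_heat_w = input_power_w * optical_fraction * stray_fraction  # 6 W

print(f"Heat directly from the chip: {direct_heat_w:.0f} W")
print(f"Extra heat from re-absorbed stray light (upper bound): {reabsorbed_heat_w:.0f} W")
```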

Fig. 4. (a)-(d) The surface temperature mapping of the COG-structure w-LEDs combined with zirconia after 10 minutes of driving. The surface temperature variation of (e) the w-LED samples with different ratios of ZrO2 nanoparticles doped in the phosphor film and (f) the samples with the ZrO2 layer covered on top of the w-LED, after 10 minutes of driving.

Figure 5 shows the CIE 1931 coordinates of the samples initially and after 10 minutes of continuous driving. The effect of this 10-minute driving on the reference samples can be observed clearly, as the CIE coordinates change before and after, which is not desirable for solid-state lighting. On the other hand, all other samples show very little change in the CIE 1931 coordinates, which is a good sign and can also be viewed as another indicator of the improvement brought by ZrO2.

Fig. 5. The variation of the CIE 1931 coordinates between the original samples and the samples after 10 minutes of driving.

Table 4 shows that the thermal conductivity of the phosphor films changed with the application of ZrO2 nanoparticles. In this table, the thermal conductivity increases as the concentration of the ZrO2 increases.

Table 4. The thermal conductivity of the PDMS film with different ZrO2 concentrations

ZrO2 concentration               0 wt%   1 wt%   5 wt%   9 wt%
Thermal conductivity (W/(m·K))   0.528   0.735   0.739   0.821

To consider the full thermal effect in the mixed film, we must include the different components of heat transport: the radiation, convection and conduction terms. The effect of the thermal conductivity of the phosphor films on the total heat transfer rate of the LED is very weak. The thermal properties of the phosphor/ZrO2 films on the w-LED can be described by Eqs. (1)-(4) below [29]:

Q_cond = kA(T_s − T_a)/d                                    (1)
Q_rad = εσA(T_s⁴ − T_a⁴)                                    (2)
Q_conv = HA(T_s − T_a),  H = εσ(T_s + T_a)(T_s² + T_a²)     (3)
Q_total = Q_conv + Q_rad + Q_cond                           (4)
        = εσA(T_s + T_a)(T_s² + T_a²)(T_s − T_a) + εσA(T_s⁴ − T_a⁴) + kA(T_s − T_a)/d
In these formulas, Q_cond, Q_rad and Q_conv are the conduction, radiation and convection heat transfer rates for the exterior surface of the w-LED, respectively, and the total heat transfer rate of the w-LED is given by Eq. (4). Here k is the thermal conductivity, A is the surface area, d is the thickness, ε is the emissivity, σ is the Stefan-Boltzmann constant, H is the convection heat transfer coefficient, and T_s and T_a are the surface temperature and the ambient temperature of the device, respectively. According to these formulas, the main contribution to the total heat transfer rate comes from the combination of Q_conv and Q_rad, because the factors εσA(T_s + T_a)(T_s² + T_a²) and εσA(T_s⁴ − T_a⁴) are much larger than the conduction factor kA(T_s − T_a)/d. As a result, the radiation and convection terms are dominant in the heat transfer, and the emissivity (ε) is more important than the thermal conductivity.

To verify the assumed effect of the ZrO2 emissivity on the package temperature, we set up a numerical environment to simulate the outcome. Figure 6 shows the simulation results, obtained with the Flow Simulation software on the basis of Eqs. (1)-(4), for the COG structure covered with materials of different emissivity. All the factors contributing to the total thermal properties have been considered, such as the surface area, thermal conductivity, and emissivity. The simulation is executed based on the structure in Fig. 6(a), covered with thin films mixed with different concentrations (0 wt%, 49 wt%, 99 wt%) of ZrO2 particles to produce silicone thin films with different emissivities (0.7, 0.85, 0.95), respectively. Figures 6(b)-6(d) show the simulated thermal images of the COG structure covered by these thin films; the chip temperature is about 96.7°C (emissivity 0.7), 94.4°C (emissivity 0.85), and 92.8°C (emissivity 0.95), respectively. From our calculation, the surface temperature is reduced significantly when the emissivity of the composite film increases while the other parameters are kept the same.
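
To make the role of emissivity in Eqs. (1)-(4) concrete, the short Python sketch below evaluates the three terms for the emissivities considered in the simulation (0.7, 0.85, 0.95). The surface area, temperatures and conduction path are illustrative assumptions, not the parameters of the paper's Flow Simulation model.

```python
# Illustrative evaluation of Eqs. (1)-(4): how the heat lost from the package
# surface grows with the emissivity of the covering film.  All geometry and
# temperature values are assumed for illustration only (a 1 cm^2 surface at
# 380 K losing heat to 300 K surroundings through ~1 cm of still air); they
# are not the parameters of the paper's simulation.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def heat_transfer_rates(emissivity, k=0.026, area=1.0e-4, d=0.01,
                        t_s=380.0, t_a=300.0):
    """Return (Q_cond, Q_rad, Q_conv, Q_total) in watts, per Eqs. (1)-(4)."""
    q_cond = k * area * (t_s - t_a) / d                        # Eq. (1)
    q_rad = emissivity * SIGMA * area * (t_s**4 - t_a**4)      # Eq. (2)
    h = emissivity * SIGMA * (t_s + t_a) * (t_s**2 + t_a**2)   # coefficient in Eq. (3)
    q_conv = h * area * (t_s - t_a)                            # Eq. (3)
    return q_cond, q_rad, q_conv, q_cond + q_rad + q_conv      # Eq. (4)

for eps in (0.70, 0.85, 0.95):  # emissivities considered in the simulation above
    q_cond, q_rad, q_conv, q_total = heat_transfer_rates(eps)
    print(f"emissivity {eps:.2f}: Q_cond={q_cond:.3f} W, Q_rad={q_rad:.3f} W, "
          f"Q_conv={q_conv:.3f} W, total={q_total:.3f} W")
```

Under these assumptions, the emissivity-dependent radiation and convection terms dominate the heat loss from the surface, consistent with the trend reported in the simulation.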

 

Fig. 6. Simulation results for the COG-structure w-LEDs covered with films of different emissivity.

The color uniformity of the w-LED can be evaluated by the deviation of correlated color temperature (ΔCCT), which is defined as the difference between CCT(max) and CCT(min) [24]. Besides their good heat dissipation ability, ZrO2 nanoparticles are also known for an excellent scattering capability, which can improve the uniformity of the CCT [30]. Figure 7(a) shows the angular distribution of CCT for Sample1 to Sample6 at a current of 80 mA. The CCT deviations for the reference sample and the phosphor/ZrO2 nanoparticle blended PDMS COG w-LEDs (Sample1 to Sample3) are about 420 K, 336 K and 162 K, and those for the other w-LED samples with the additional transparent ZrO2 PDMS layer (Sample4 to Sample6) are about 833 K, 721 K and 434 K, respectively. From these results, the use of a ZrO2 nanoparticle-embedded PDMS film can improve the color uniformity of the w-LED, and the phosphor films doped with ZrO2 nanoparticles perform better than the samples with the additional transparent ZrO2 PDMS layer. The reason is that the zirconia particles inside the phosphor film yield a smoother angular-dependent CCT distribution than ZrO2 nanoparticles embedded in a PDMS film on the surface of the phosphor layer, because they can provide effective scattering that excites the surrounding phosphor particles. The haze intensity measurements confirm that the doped phosphor films have a better diffusing ability than the phosphor films covered with the additional transparent ZrO2 PDMS layer, as shown in Fig. 7(b). However, the CCT deviation worsens again when the concentration of the ZrO2 nanoparticles is increased towards 9 wt%; the best ratio is about 5 wt%.
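
Since ΔCCT is simply the spread of the angular CCT readings, it is straightforward to compute; the small Python helper below shows the calculation (the example readings are invented for illustration, not measured data from the paper).

```python
# Minimal helper for the CCT deviation used above: the difference between the
# maximum and minimum CCT over the angular measurements.

def cct_deviation(cct_values):
    """Return delta-CCT = CCT(max) - CCT(min) for a set of angular CCT readings (in K)."""
    return max(cct_values) - min(cct_values)

# Purely illustrative angular CCT readings (K); not measured data.
example_readings = [4730, 4748, 4765, 4790, 4820]
print(f"CCT deviation: {cct_deviation(example_readings)} K")   # 90 K
```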

Fig. 7. (a) The angular-dependent CCT distribution at 80 mA and (b) the haze intensity of the reference sample and Sample1 to Sample6.

4. Conclusion

This study demonstrates a highly efficient design of w-LED with a COG structure and ZrO2 PDMS films. The w-LED obtains the smallest CCT deviation (132 K) in the case of the 5 wt% ZrO2-blended phosphor film, and the use of the ZrO2 film can improve the color uniformity of the w-LED effectively. The phosphor film blended with 1 wt% zirconia particles improves the luminous efficiency of the w-LED and achieves an 11% enhancement over the COG w-LED without a ZrO2 film. The ZrO2-blended phosphor film and the pure ZrO2 film can reduce the surface temperature of the COG w-LEDs from 120°C to 110°C, and the ZrO2-blended phosphor film is more effective than the pure ZrO2 film in terms of thermal dissipation. The high emissivity of ZrO2 helps this heat dissipation, and this is verified via simulation and experiments. In conclusion, the combination of ZrO2 PDMS films with the COG structure is very useful for resolving the heat dissipation problem and improving the lumen efficiency of w-LEDs.



3Gsecure

Security for the Third Generation (3G) Mobile System
Colin Blanchard, Network Systems & Security Technologies, BTexaCT

The security risks that such a system must guard against include:

· Access and use of a service to avoid or reduce a legitimate charge
· Loss of confidentiality or integrity of a user’s or operator’s data
· Denial of a specific user’s access to their service, or denial of access by all users to a service

However, user expectations for instant communication and ease of use, as well as terminals which are easily lost or stolen, present a number of unique challenges in the mobile environment. The original first-generation analogue mobile systems employed a simple electronic serial number to confirm that the terminal should be allowed access to the service. It was not long before the protection afforded to this number was broken. Eventually, devices appeared that could read these electronic serial numbers from the air and access an unsuspecting user’s account for a short time, before moving on to the next, in the hope that the small charges on each bill would not be noticed. So why was this not predicted at the time? Unfortunately, there always seems to be an assumption, with any new development in communications technology, that complexity alone will protect such services from abuse. Second-generation systems such as GSM were designed from the beginning with security in mind. This has stood up to the kind of attacks that were prevalent on the analogue systems at the time, thanks mainly to the ability to put responsibility for security in the hands of the Home Environment (HE) operator. The HE operator can control the use of the system by the provision of the Subscriber Identity Module (SIM), which contains a user identity and authentication key. This is specifically arranged so that this long-life authentication key is not required by the Serving Network (SN) when roaming, not exposed over the air, and not exposed across the interface between the SIM and the mobile. This keeps to a minimum the level of trust the HE operator needs to place in the user, the Serving Network and the manufacturer of the Mobile Equipment (ME). In 1996, when the 3rd Generation system known as UMTS was being developed in ETSI (European Telecommunications Standards Institute), the opportunity was taken to review the basis for security in existing mobile systems and to develop a new security architecture specifically to be used in UMTS. This early work was subsequently taken forward into the Third Generation Partnership Project (3GPP) and forms the basis for the Release 99 deployment of 3G systems.
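
To illustrate the design principle described above (the long-term key never leaves the SIM or the home operator; only a challenge and a derived response cross the air interface), here is a simplified Python sketch. It uses HMAC-SHA256 as a stand-in for the actual GSM/UMTS authentication algorithms (A3/A8 or MILENAGE), so it is illustrative only and not the real protocol.

```python
# Simplified challenge-response authentication in the spirit of GSM/UMTS.
# HMAC-SHA256 stands in for the real SIM algorithms (A3/A8, MILENAGE); the
# point is only that the long-term key Ki never leaves the SIM / home operator,
# and only (RAND, SRES) ever travel over the air.

import hashlib
import hmac
import os

def sim_response(ki: bytes, rand: bytes) -> bytes:
    """Computed inside the SIM (and by the home operator): SRES = f(Ki, RAND)."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]  # truncated response

# The home operator provisions the SIM with a secret key Ki.
ki = os.urandom(16)

# 1. The network issues a random challenge.
rand = os.urandom(16)

# 2. The SIM computes the response locally; Ki itself is never transmitted.
sres_from_sim = sim_response(ki, rand)

# 3. The home operator computes the expected response; the serving network compares.
expected_sres = sim_response(ki, rand)
print("Authenticated:", hmac.compare_digest(sres_from_sim, expected_sres))
```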


Overview of GPRS and UMTS

GPRS and UMTS are evolutions of the global system for mobile communication (GSM) networks. GSM is a digital cellular technology that is used worldwide, predominantly in Europe and Asia. GSM is the world’s leading standard in digital wireless communications. GPRS is a 2.5G mobile communications technology that enables mobile wireless service providers to offer their mobile subscribers packet-based data services over GSM networks. Common applications of GPRS include the following: Internet access, intranet/corporate access, instant messaging, and multimedia messaging. GPRS was standardized by the European Telecommunications Standards Institute (ETSI), but today is standardized by the Third Generation Partnership Project (3GPP). UMTS is a 3G mobile communications technology that provides wideband code division multiple access (CDMA) radio technology. The CDMA technology offers higher throughput, real-time services, and end-to-end quality of service (QoS), and delivers pictures, graphics, video communications, and other multimedia information as well as voice and data to mobile wireless subscribers. UMTS is standardized by the 3GPP. The GPRS/UMTS packet core comprises two major network elements:
• Gateway GPRS support node (GGSN)—a gateway that provides mobile cell phone users access to a public data network (PDN) or specified private IP networks. The GGSN function is implemented via Cisco IOS software on the Cisco 7200 series router or on the Cisco Multi-Processor WAN Application Module (MWAM) installed in a Catalyst 6500 series switch or Cisco 7600 series Internet router. Cisco IOS GGSN Release 4.0 and later provides both the 2.5G GPRS and 3G UMTS GGSN functions.

• Serving GPRS support node (SGSN)—connects the radio access network (RAN) to the GPRS/UMTS core and tunnels user sessions to the GGSN. The SGSN sends data to and receives data from mobile stations, and maintains information about the location of a mobile station (MS). The SGSN communicates directly with the MS and the GGSN. SGSN support is available from Cisco partners or other vendors.

In a 2.5G environment, the RAN is composed of mobile stations that connect to a base transceiver station (BTS) that connects to a base station controller (BSC). In a 3G environment, the RAN is made up of mobile stations that connect to NodeB, which connects to a radio network controller (RNC). The RAN connects to the GPRS/UMTS core through an SGSN, which tunnels user sessions to a GGSN that acts as a gateway to the services networks (for example, the Internet and intranet). The connection between the SGSN and the GGSN is enabled through a tunneling protocol called the GPRS tunneling protocol (GTP): GTP Version 0 (GTP V0) for 2.5G applications, and GTP Version 1 (GTP V1) for 3G applications. GTP is carried over IP. Multiple SGSNs and GGSNs within a network are referred to collectively as GPRS support nodes (GSNs).
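
As a rough illustration of what GTP tunneling involves, the sketch below builds a minimal GTP-U (user plane) header in front of a user packet. It reflects the basic 8-byte GTPv1 header layout (version, message type, length, tunnel endpoint identifier) but omits optional fields and all GTP-C signaling, so it should be read as a simplified teaching example rather than a complete implementation.

```python
# Minimal sketch of GTP-U encapsulation as used between the SGSN and GGSN.
# Only the mandatory 8-byte GTPv1 header is built (no optional sequence number,
# extension headers, or GTP-C signaling), so this is illustrative only.

import struct

def gtpu_encapsulate(teid: int, user_packet: bytes) -> bytes:
    """Prepend a minimal GTPv1-U header (G-PDU) to a user IP packet."""
    flags = 0x30          # version=1, protocol type=GTP, no optional fields
    message_type = 0xFF   # 255 = G-PDU (encapsulated user data)
    length = len(user_packet)   # payload length following the mandatory header
    header = struct.pack("!BBHI", flags, message_type, length, teid)
    return header + user_packet

# Example: tunnel a (fake) user packet toward the GGSN using TEID 0x1234.
tunneled = gtpu_encapsulate(0x1234, b"...user IP packet bytes...")
print(tunneled[:8].hex())  # the 8-byte GTP-U header
```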

To assign mobile sessions an IP address, the GGSN uses the Dynamic Host Configuration Protocol (DHCP), a Remote Authentication Dial-In User Service (RADIUS) server, or a local address pool defined for an access point configured on the GGSN. The GGSN can use a RADIUS server to authorize and authenticate remote users. DHCP and RADIUS services can be specified either at the global configuration level or for each access point configured on the GGSN. In Cisco IOS Release 12.1(5)T and later, the GGSN on the Cisco 7200 series router (with an Integrated Services Adapter [ISA] card) supports the IP Security (IPSec) protocol to provide data confidentiality, data integrity, and data authentication between participating peers. On the Cisco MWAM installed in a Catalyst 6500 series switch / Cisco 7600 series Internet router platform, IPSec encryption is performed on the IPSec Virtual Private Network (VPN) Acceleration Services Module.

GPRS Interface Reference Model

The 2.5G GPRS and 3G UMTS standards use the term interface to label (or identify) the communication path between different network elements. The GPRS/UMTS standards define the requirements and characteristics of communication between different GPRS/UMTS network elements over these interfaces. These interfaces are commonly referred to in descriptions of GPRS/UMTS networks. Figure 1-3 shows the interfaces that are implemented in the Cisco IOS GGSN feature:
• Gn interface—Interface between GSNs within the same public land mobile network (PLMN) in a GPRS/UMTS network. GTP is a protocol defined on the Gn interface between GSNs in a GPRS/UMTS network.
• Gi interface—Reference point between a GPRS/UMTS network and an external packet data network.
• Ga interface—Interface between a GGSN and charging gateway (CG) in a GPRS/UMTS network.

Virtual Template Interface

To facilitate configuration of connections between the GGSN and SGSN, and the GGSN and PDNs, the Cisco IOS GGSN software uses an internal interface called a virtual template interface. A virtual template is a logical interface that is not tied directly to a specific interface, but that can be associated dynamically with an interface. As with a physical interface on a router, you can assign an IP address to the virtual template interface. You can also configure IP routing characteristics on the virtual template interface. You are required to configure certain GPRS/UMTS-specific elements on the virtual template interface, such as GTP encapsulation (which is necessary for communicating with the SGSN) and the access list that the GGSN uses to determine which PDNs are accessible on the network.

Access Points

The GPRS/UMTS standards define a network identity called an access point name (APN). An APN identifies the service or network to which a user can connect from a GGSN in a GPRS/UMTS network. To configure APNs, the Cisco IOS GGSN software uses the following configuration elements:
• Access point—Defines an APN and its associated access characteristics, including security and method of dynamic addressing.
• Access point list—Logical interface that is associated with the virtual template of the GGSN. The access-point list contains one or more access points.
• Access group—An additional level of security that is configured at an access point to control access to and from a PDN. When an MS is permitted access to the GGSN as defined by a traditional IP access list, the IP access group further defines whether access is permitted to the PDN (at the access point). The IP access group configuration can also define whether access from a PDN to an MS is permitted.
For more detailed information on access-point configuration, refer to the “Configuring Access Points on the GGSN” section on page 1-10.

Benefits

The 2.5G GPRS technology provides the following benefits:
• Enables the use of a packet-based air interface over the existing circuit-switched GSM network, which allows greater efficiency in the radio spectrum because the radio bandwidth is used only when packets are sent or received
• Supports minimal upgrades to the existing GSM network infrastructure for network service providers who want to add GPRS services on top of GSM, which is currently widely deployed
• Supports enhanced data rates in comparison to the traditional circuit-switched GSM data service
• Supports larger message lengths than Short Message Service (SMS)
• Supports a wide range of access to data networks and services, including VPN/Internet service provider (ISP) corporate site access and Wireless Application Protocol (WAP).
In addition to the above, the 3G UMTS technology includes the following:
• Enhanced data rates of approximately
– 144 kbps—Satellite and rural outdoor
– 384 kbps—Urban outdoor
– 2048 kbps—Indoor and low-range outdoor
• Supports connection-oriented Radio Access Bearers with specified QoS, enabling end-to-end QoS

 

GGSN Release 5.0 and later is a fully-compliant 2.5G and 3G GGSN that provides the following features:
• Release 99 (R99), Release 98 (R98) and Release 97 (R97) support and compliance
• GTPv0 and GTPv1 messaging
• IP Packet Data Protocol (PDP) and PPP PDP types
• Cisco Express Forwarding (CEF) switching for GTPv0 and GTPv1, and for IP and PPP PDP types
• Support of secondary PDP contexts for GTPv1 (up to 11)
• Virtual APN
• VRF support per APN
• Multiple APNs per VRF
• VPN support
– Generic routing encapsulation (GRE) tunneling
– Layer 2 Tunneling Protocol (L2TP) extension for PPP PDP type
– PPP Regeneration for IP PDP type
– 802.1Q virtual LANs (VLANs)
• Security features
– Duplicate IP address protection
– PLMN range checking
– Blocking of Foreign Mobiles
– Anti-spoofing
– Mobile-to-mobile redirection
• Quality of service (QoS)
– Support of UMTS classes and interworking with differentiated services (DiffServ)
– Delay QoS
– Canonical QoS
– GPRS QoS (R97/R98) conversion to UMTS QoS (R99) and the reverse
– Call Admission Control
– Per-PDP policing
• Dynamic address allocation
– External DHCP server
– External RADIUS server
– Local pools
• Per-APN statistics
• Anonymous access
• RADIUS authentication and accounting
• Accounting
– Wait accounting
– Per-PDP accounting
– Authentication and accounting using RADIUS server groups mapped to APNs
– 3GPP vendor-specific attributes (VSAs) for IP PDP type
– Transparent mode accounting
– Class attribute
– Interim updates
– Session idle timer
– Packet of Disconnect (PoD)
• Dynamic Echo Timer
• GGSN interworking between 2.5G and 3G SGSNs with routing area (RA) update from
– 2.5G to 2.5G SGSN
– 2.5G to 3G SGSN
– 3G to 3G SGSN
– 3G to 2.5G SGSN
• Charging
– Time trigger
– Charging profiles
– Tertiary charging gateway
– Switchback to primary charging gateway
• Maintenance mode
• Multiple trusted PLMN IDs
• GGSN-IOS SLB messaging
• Session timeout


Does data have weight? (The weight of data)

How much does information weigh? Most people know that computers hold all kinds of information, from e-mail and documents to video clips, web pages and everything else. All of this information is represented as strings of binary digits (zeros and ones). These digits are not only mathematical in nature; they are also tangible things: they exist as voltages in electronic circuits.

So each bit of information must have some weight, however tiny. The question we now want to ask is: how much does the information sent over the Internet on a typical day weigh?

To answer it, we searched technical databases, leafed through reference books, consulted Google and talked to experts. It soon became clear that if we wanted a definite answer we would have to work it out ourselves, because apparently nobody had asked the question before.

The key to the puzzle of the Internet's weight lies in understanding the basic process through which all of this information passes, whether you are talking about an e-mail sent across the street or a webcam image sent from the other side of the world.

To make the journey across the Internet manageable, information is divided into small bundles called packets. Each packet contains a small piece of the data, from a few dozen bytes up to about a thousand bytes. Besides the payload, every packet carries the addressing details needed for routing.

Regardless of where a packet is sent or what kind of equipment it passes through, one basic cycle repeats over and over until the packet reaches its destination.

The message is stored in the memory of a computer, then analysed to determine where it should be sent next, encoded for transmission (either as electrons in an Ethernet cable or as photons radiated from a Wi-Fi card), sent to the next computer in the chain, decoded, and stored again in that computer's memory.

This cycle repeats as many times as necessary. What matters here is not the electrons or radio waves themselves that leave your computer, but the pattern of bits that they describe.

The electrons or waves sent directly from your computer usually do not travel very far (a few hundred feet at most) before reaching another machine.

Even when you send packets as pulses of light over optical fibre to a destination thousands of miles away, repeaters placed roughly every 20 miles at sea level absorb the incoming photons and emit new photons toward the next repeater.

In other words, the physical things that move through the Internet never travel very far. What really travels long distances (and what carries the weight) is the bit pattern representing each packet, which is continually re-created in the electronic memory of every system the data passes through. One way to grasp this is to imagine that I own a car that you would like to have.

Suppose, as a further odd assumption, that you live on an island that is completely unreachable by air or sea.

So I cannot ship my car to you directly. Fortunately, your island has a modern, fully equipped workshop and a large warehouse of car parts.

To "send" you my car, then, I examine it in detail, draw up a complete set of plans and fax them to you.

You can then assemble my car from those detailed plans. Now you have a new car that you can get into and drive around your island, a car that is entirely real and has weight.

If we can measure the weight of the bits that make up a piece of information while it sits in a computer's memory, we are halfway to calculating the weight of the Internet.

Estimating the Internet's weight requires a little technical background. Inside an ordinary computer's memory, the thing that remembers whether a given bit is a zero or a one is a capacitor, a component on the chip that can hold a small amount of electric charge.

If a cell's capacitor is charged, it represents a logical "one".

An uncharged capacitor represents a logical "zero". Memory capacitors are so small that only about 40,000 electrons are needed to charge each one, which is a genuinely tiny amount.

For comparison, about 5.7 × 10^18 electrons pass through a 100-watt light bulb every second!

Now let's look at a typical e-mail. An e-mail carries roughly 50 kilobytes of content. Since there are 8 bits in a byte and 1,024 bytes in a kilobyte, an e-mail contains roughly 410,000 bits.

Of course, not all of those bits are ones; that would make a very boring e-mail! On average about half of the bits are ones and half are zeros.

So roughly 205,000 ones have to be stored in the computer's memory, which requires about 8 billion electrons. Each electron weighs about 2 × 10^-30 pounds, so a 50-kilobyte e-mail weighs roughly 2 × 10^-19 ounces, about the weight of 21,000 lead atoms.

That may sound like a lot, but it is actually an extremely small amount.

An ounce of lead contains about 82 million quadrillion (8.2 × 10^22) atoms!

And that was just one e-mail.

How much information in total (all the web pages, instant messages, video streams and everything else you can imagine) flows through the Internet? Pinning down an exact figure is certainly difficult, but we eventually arrived at our answer by looking at the activity of end-user connections (dial-up modem lines, fibre-optic links, DSL and so on).

Broadband connections to homes and businesses, such as ADSL and cable modems, generate most of the Internet's traffic.

At present about 75 percent of all Internet traffic is file sharing between users, and about 59 percent of that file sharing is people exchanging video files.

Music file sharing accounts for roughly another 33 percent of the traffic, while e-mail is estimated to make up only about 9 percent. The grand total comes to roughly 40 petabytes (4 × 10^16 bytes).

Plugging that number into the same formula we derived for the 50-kilobyte e-mail gives about five billionths of a kilogram.

Finally, after a great deal of calculation and scribbling, we had the answer: the content of the Internet weighs roughly 0.2 millionths of an ounce (about 5.7 micrograms).

Love letters, business contracts, spam (virus-carrying messages), petitions, newsletters, television shows, news articles, holiday plans, popular web pages, music, messages of congratulation and condolence, and everything else in human life is encoded as zeros and ones across the Internet.

Add it all up, and the total weighs roughly as much as the smallest grain of sand on Earth, a grain only about two thousandths of an inch (roughly 0.05 millimetres) across!
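To see where the figures above come from, here is a minimal Python sketch that redoes the arithmetic for a single e-mail and for the daily traffic estimate. The constants (50 KB per e-mail, 40,000 electrons per stored "one", the electron mass, 40 PB of daily traffic) are simply the assumptions quoted in the text, not independent measurements.

```python
# Back-of-the-envelope "weight of data" estimate, using the article's assumptions.

ELECTRON_MASS_KG = 9.11e-31          # mass of one electron, kg
ELECTRONS_PER_STORED_ONE = 40_000    # electrons needed to charge one DRAM cell

def weight_of_data_kg(num_bytes: int, fraction_of_ones: float = 0.5) -> float:
    """Estimated mass of the electrons storing the '1' bits of num_bytes of data."""
    bits = num_bytes * 8
    ones = bits * fraction_of_ones
    electrons = ones * ELECTRONS_PER_STORED_ONE
    return electrons * ELECTRON_MASS_KG

# One 50-kilobyte e-mail (1 KB = 1024 bytes, as in the text).
email_bytes = 50 * 1024
print(f"one e-mail   : {weight_of_data_kg(email_bytes):.2e} kg")

# One day of Internet traffic, taken as 40 petabytes (4e16 bytes) per the article.
mass_day = weight_of_data_kg(int(4e16))
print(f"daily traffic: {mass_day:.2e} kg  (~{mass_day * 1e9:.1f} micrograms)")
```

Running it reproduces the order of magnitude quoted above: a few times 10^-21 kg per e-mail and a few micrograms for a day's traffic.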

 

 

 

And now for my own research (Decode), carried out in my own well-equipped laboratory, which reached a different result; the outcome is worth a look...

A one-terabyte hard disk full of data carries about 0.008 micrograms of data weight, and at half capacity (500 GB) about 0.003 micrograms, in the fragmented state... I have also examined other states, which form part of the main research; to obtain the full article you have to purchase it.

By studying this process carefully, new approaches to **** and **** and **** and so on can be proposed, which are covered in detail in the main article.



Nanoelectronics


In 1965, Gordon Moore, a co-founder of Intel, published an analysis according to which the number of transistors used in Intel microprocessors doubles roughly every 18 months; with the size of the silicon die held constant, this amounts to the transistor gate dimensions shrinking by about half each generation. The observation became known as Moore's law. This shrinking carried an economic message: the smaller the gate, the faster the transistor could switch and the less energy it consumed, and in practice transistor dimensions have shrunk by a factor of about 0.7 every three years. Because of quantum-mechanical effects, however, the limits of fabrication techniques may prevent conventional FETs from shrinking much further, and within a decade or two conventional fabrication methods are expected to stall below roughly 50 nm. The drive to shrink circuit elements to nanometre and even molecular dimensions is therefore pushing researchers toward devices whose power and performance go well beyond ordinary transistors. These new nanometre-scale devices can act both as switches and as amplifiers; but unlike today's FETs, whose operation relies on the movement of masses of electrons through the bulk of the material, the new devices operate on quantum-mechanical phenomena at the nanometre scale, allowing many more transistors to fit on a silicon chip. Increasing the number of transistors and their efficiency lowers cost, so it was economical to make every transistor as small as possible; but this miniaturisation would eventually stop, so the continued growth of the electronics industry required alternative technologies: technologies that solve the old problems and remain economically viable. This time it was nanotechnology that came to the aid of electronics, and the field of molecular electronics, or nanoelectronics, was founded.
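As a quick illustration of the scaling argument above, the following Python sketch projects transistor counts under an assumed 18-month doubling period and feature size under the 0.7x-every-three-years shrink quoted in the text; the starting values are arbitrary examples, not historical data.

```python
# Simple Moore's-law style projection using the rates quoted in the text.
# The starting values below are illustrative placeholders, not real product data.

def project(years: float,
            start_transistors: float = 1e6,      # assumed starting transistor count
            start_feature_nm: float = 1000.0,    # assumed starting gate length in nm
            doubling_period_years: float = 1.5,  # "doubles every 18 months"
            shrink_per_3_years: float = 0.7):    # "0.7x smaller every 3 years"
    transistors = start_transistors * 2 ** (years / doubling_period_years)
    feature_nm = start_feature_nm * shrink_per_3_years ** (years / 3)
    return transistors, feature_nm

for y in (0, 6, 12, 18, 24):
    t, f = project(y)
    print(f"after {y:2d} years: ~{t:.2e} transistors, ~{f:6.1f} nm features")
```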


Nanoelectronics is one of the points where different sciences will converge in the future, and it is among the most widely applicable branches of nanotechnology. Increasing data-storage capacity, increasing data-transfer speed and shrinking electronic devices, especially transistors, are all of great importance today, because smaller devices not only process data faster but also consume less power, and nanoelectronics can open the way to ever smaller dimensions. To become familiar with this technology, to gain a deeper understanding of the various phenomena that occur at nanometre scales, and thus to analyse results rigorously and refine experimental methods, the underlying sciences (quantum physics, quantum mechanics and solid-state physics) must be studied.
Nanotechnology is a tool-dependent field, and its tools are steadily improving. Nanotechnology and its applied branches such as nanoelectronics are, in essence, the efficient production of devices and systems by controlling matter at the nanometre length scale, and the exploitation of the novel properties and phenomena that emerge at that scale. Today's electronics industry is based on silicon; it is about 50 years old and has reached technological, industrial and commercial maturity. Facing it is molecular electronics, which is at a very early stage and is intended to become the next generation of the silicon electronics industry. Molecular electronics is a discipline built on nanotechnology with broad applications in electronics. Given how widely electronics is used in commercial products, greater investment in and attention to nanoelectronics could, in the not-too-distant future, yield substantial returns from products that replace silicon electronics.
Goals:
Recent decades have seen great progress in increasing the storage capacity of memories and reducing their size, with processing speed doubling roughly every 18 months, and this promises a major transformation of the microelectronics industry, both fundamentally and economically, over the next 15 years. Research continues, aiming to produce new characteristic properties and new form factors, and thus to create a new nanoelectronics.
Industrial applications of nanoelectronics:
With this technology, data-storage capacity can be increased a thousandfold or more, ultimately leading to supercomputing devices as small as a wristwatch. The ultimate storage density reaches about one terabit per square inch, which allows 50 or more DVDs to be stored on a hard disk the size of a credit card. Other products include chips with extremely small feature sizes, for example 32 to 90 nanometres, and compact 100-gigabyte optical discs.
Some examples of nanotechnology applied to electronics:
1) Carbon nanotubes
Nanotubes are tubular structures with a hexagonal lattice; they can be thought of as rolled-up graphite sheets. Depending on the axis about which the sheet is rolled, nanotubes can be conducting or semiconducting. Because a carbon atom with three bonds still has a free p orbital, electron waves travel easily along the outer surface of these tubes.


Besides high conductivity, this carbon structure also has excellent mechanical strength. There are drawbacks as well: most nanotube fabrication processes do not allow full control and monitoring during growth. For example, achieving a precise, uniform diameter for tubes grown in the same batch, controlling whether single-walled or multi-walled tubes are produced, and growing long, straight tubes without kinks are all issues that still require further study to improve production quality. In addition, quantum-mechanical electron tunnelling can increase leakage currents and hence losses, so studying ways to reduce the probability of tunnelling is another useful line of work. Thanks to their high conductivity and low resistance at room temperature, carbon nanotubes are used in the conduction channels of transistors and in the tips of nanoscale imaging microscopes.
2) Nanotransistors
According to Moore's law, the number of transistors per unit area of an electronic chip doubles every 10 to 18 months. The dominant transistor technology today is the MOSFET, which is based on silicon. Shrinking MOSFET dimensions brings problems, among them various leakage currents. One way to address this is to build transistors from nanostructures, in particular nanotubes.
3) Nanoscale computing elements (nanocomputers)
In many fields, including nanotechnology, links between different branches of science are now unavoidable, and one result of such collaboration is the design of nanoscale computing elements. Aromatic hydrocarbons derived from benzene, because of their p orbitals, the electron clouds above and below the ring, and resonance, can be good media for electron transport, whereas chain hydrocarbons behave as insulators. By joining such hydrocarbons together, diodes, logic gates and electronic circuits can be formed.
4) MRAMs (Magnetic Random Access Memories)
Today's memory technologies (RAM, Flash memory, ...) create various problems for their users, for example the low read/write speed of Flash memories and EEPROMs, or the cost limits on expanding RAM. MRAM is a non-volatile memory technology that offers both high speed and high capacity. It relies on the difference in electrical resistance of thin material layers when their particles are polarised in different directions, a property known as magnetoresistance. Because MRAM memory cells are not transistor-based, problems such as tunnelling do not arise at small dimensions, and MRAM cells can be shrunk to nanometre scales.
5) C60
Among nanostructures, C60 offers advantages even over carbon nanotubes. C60 consists of 12 pentagons and 20 hexagons arranged symmetrically.


C60 molecules are found in benzene solutions and can be recovered by evaporation. Compounds of C60 with metals, such as K3C60 and Cs2RbC60, in which the metal atoms fill the empty spaces in the C60 structure, are superconducting at relatively convenient temperatures; research into compounds that superconduct at higher temperatures continues. Another application of C60 is in logic gates. By patterning gold on a silicon surface by lithography and passing current through the gold wires, a grid is created whose junction spacing is on the order of nanometres. A dilute C60 solution is placed between the junctions so that one C60 molecule sits in each gap. When current flows in the gold wires, the C60 begins to oscillate because of a quantum-mechanical effect, so current flows only at specific times; this property can be exploited in the design of logic gates.
The fundamental difference between ULSI technology and nanotechnology:
The essential difference between ULSI technology and nanotechnology is the difference between the "top-down" and "bottom-up" approaches to making a product. In the top-down approach the main problem is the very high cost of shrinking transistor dimensions by lithography, while the main goal of ULSI technology has been to reduce the cost per bit in memories and the cost per switch in logic circuits. In the bottom-up approach, by contrast, the hope is that the basic building blocks of a system can be realised using sophisticated chemistry and molecular design, but the main problem is uniformity and reliability of the system at large scale. If the current architecture of integrated circuits could be implemented bottom-up with high reliability, nanotechnology would become extraordinarily important to the development of the IC industry.
In ULSI technology, because system efficiency is the goal, the greatest freedom lies in system design and then in circuit design, so the fabrication process and semiconductor devices such as transistors show the least variety. In nanotechnology, conversely, there are many high-performance basic building blocks, but the system architecture and the interconnection of the blocks have not been well worked out. In any case, two routes for developing nanoelectronics can be imagined. The first is to combine nanotechnology with existing ULSI technology; merging fields such as biotechnology and electronics, combining the pharmaceutical and semiconductor markets, and ultimately building integrated systems composed of diverse materials and components would be results of this route. The second is for nanotechnology to replace ULSI technology, which will be possible only if today's systems can be implemented bottom-up with better performance and at lower cost.
Conclusion:
What is certain is that molecular electronics has a bright future and is growing and evolving very rapidly, and it therefore deserves particular attention. The practical results of developing branches of nanotechnology such as nanoelectronics will be equipment that differs profoundly from what came before, a completely new generation with unique capabilities. Nanotubes and DNA are two important building blocks for nanoelectronic products; of the two, DNA attracts particular interest because of its local properties and its presence in living organisms. Nanotechnology and its applied branches in the various sciences, such as nanoelectronics, are emerging fields that still need progress both scientifically and technologically before their products are commercialised. Given that some products of this technology are already on the market, predicting which future products will do best (competitively) requires closer study of the field's indicators across industry and its subfields.

Work needed to advance this field:

Nanoelectronics is a broad field with the potential to change many sciences fundamentally, even medicine, and the following would help push it forward:

  1.  Understanding the principles of transport at the nanoscale
  2.  Developing a better understanding of self-assembly methods for particles, so that devices can be built more cheaply; this in turn requires solving the interconnection and replacement problems in transistors


SMD Capacitors (surface-mount capacitors)

SMD capacitor basics

Surface mount capacitors are basically the same as their leaded predecessors. However instead of having leads they have metallised connections at either end.

This has a number of advantages:

  • Size:   SMD capacitors can be made very much smaller than their leaded relations. The fact that no wire leads are required means that different construction techniques can be used, and this allows much smaller components to be made.
  • Ease of use in manufacturing:   As with all other surface mount components, SMD capacitors are very much easier to place using automated assembly equipment.
  • Lower spurious inductance:   The fact that no leads are required and the components are smaller means that the levels of spurious inductance are much lower, making these capacitors much nearer to the ideal component than their leaded relations.


Multilayer ceramic SMD capacitors

The multilayer ceramic SMD capacitors form the majority of SMD capacitors that are used and manufactured. They are normally contained in the same type of packages used for resistors.


MULTILAYER CERAMIC SMD CAPACITOR DIMENSIONS

SIZE DESIGNATION    MEASUREMENTS (MM)    MEASUREMENTS (INCHES)
1812                4.6 x 3.0            0.18 x 0.12
1206                3.0 x 1.5            0.12 x 0.06
0805                2.0 x 1.3            0.08 x 0.05
0603                1.5 x 0.8            0.06 x 0.03
0402                1.0 x 0.5            0.04 x 0.02
0201                0.6 x 0.3            0.02 x 0.01

Construction:   The multilayer ceramic SMD capacitor consists of a rectangular block of ceramic dielectric in which a number of interleaved precious metal electrodes are contained. This multilayer structure gives rise to the name and the MLCC abbreviation, i.e. Multi-Layer Ceramic Capacitor.

This structure gives rise to a high capacitance per unit volume. The inner electrodes are connected to the two terminations, either by silver palladium (AgPd) alloy in the ratio 65 : 35, or silver dipped with a barrier layer of plated nickel and finally covered with a layer of plated tin (NiSn).

Ceramic capacitor manufacture:   The raw materials for the dielectric are finely milled and carefully mixed. Then they are heated to temperatures between 1100 and 1300°C to achieve the required chemical composition. The resultant mass is reground and additional materials added to provide the required electric properties.

The next stage in the process is to mix the finely ground material with a solvent and binding additive. This enables thin sheets to be made by casting or rolling.

For multilayer capacitors, electrode material is printed on the sheets; after the sheets have been stacked and pressed, they are co-fired with the ceramic compact at temperatures between 1000 and 1400°C. The totally enclosed electrodes of a multilayer ceramic capacitor (MLCC) also guarantee good life-test behaviour.


Tantalum SMD capacitors

Tantalum SMD capacitors are widely used to provide levels of capacitance that are higher than those that can be achieved when using ceramic capacitors. As a result of the different construction and requirements for tantalum SMT capacitors, there are some different packages that are used for them. These conform to EIA specifications.

Some SMD tantalum capacitors mounted on a printed circuit board. Note the bar across one end indicating the polarity.

Tantalum SMD capacitor. Note the bar across one end indicating the polarity.


TANTALUM SMD CAPACITOR DIMENSIONS

SIZE DESIGNATION    MEASUREMENTS (MM)    EIA DESIGNATION
Size A              3.2 x 1.6 x 1.6      EIA 3216-18
Size B              3.5 x 2.8 x 1.9      EIA 3528-21
Size C              6.0 x 3.2 x 2.2      EIA 6032-28
Size D              7.3 x 4.3 x 2.4      EIA 7343-31
Size E              7.3 x 4.3 x 4.1      EIA 7343-43


Electrolytic SMD capacitors

Electrolytic capacitors are now being used increasingly in SMD designs. Their very high levels of capacitance combined with their low cost make them particularly useful in many areas.

Often SMD electrolytic capacitors are marked with their value and working voltage. There are two basic methods. One is to write the value in microfarads (µF) directly: a marking of 33 6V indicates a 33 µF capacitor with a working voltage of 6 volts. The alternative system uses a code consisting of a letter followed by three figures. The letter indicates the working voltage, as defined in the table below, and the three figures give the capacitance in picofarads; as with many other marking systems, the first two figures are the significant figures and the third is the multiplier. A marking of G106 therefore indicates a working voltage of 4 volts and a capacitance of 10 × 10^6 picofarads, which works out to 10 µF.

ELECTROLYTIC SMD CAPACITOR VOLTAGE CODES

LETTER CODE    VOLTAGE
e              2.5
G              4
J              6.3
A              10
C              16
D              20
E              25
V              35
H              50
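As a small illustration of the letter-plus-three-figure scheme described above, here is a short Python sketch that decodes such a marking using the voltage table; the function name and behaviour are my own illustration, not part of any standard library.

```python
# Decode an SMD electrolytic capacitor marking such as "G106":
# letter = working voltage, three digits = capacitance in picofarads
# (two significant figures followed by a power-of-ten multiplier).

VOLTAGE_CODES = {"e": 2.5, "G": 4, "J": 6.3, "A": 10,
                 "C": 16, "D": 20, "E": 25, "V": 35, "H": 50}

def decode_electrolytic(marking: str):
    letter, digits = marking[0], marking[1:]
    voltage = VOLTAGE_CODES[letter]
    picofarads = int(digits[:2]) * 10 ** int(digits[2])
    return voltage, picofarads

volts, pf = decode_electrolytic("G106")
print(f"G106 -> {volts} V, {pf} pF ({pf / 1e6:.0f} uF)")   # 4 V, 10 uF
```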


SMD capacitor codes

Comparatively few SMD capacitors have their values marked on their cases. This means that great care must be taken when handling them to ensure they are not misplaced or mixed. However a few capacitors do have markings. The capacitor values are coded. This means that it is necessary to know the SMD capacitor codes. These are simple and easy to decode.

A three figure SMT capacitor code is normally used as there is usually little space for anything more. In common with other marking codes the first two indicate the significant figures, and the third is a multiplier.

The outline of a typical SMD capacitor showing its marking code.

SMD capacitor code

For a marking such as 472 (as in the outline above), the two figures 47 are the significant figures and the 2 is the multiplier (10², i.e. ×100), giving 47 × 100 = 4,700 pF, or 4.7 nF.
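A minimal Python sketch of the three-figure decode described above; the helper name is illustrative only.

```python
# Decode a three-figure SMD capacitor code: two significant figures plus a
# power-of-ten multiplier, with the result in picofarads.

def decode_three_figure(code: str) -> int:
    return int(code[:2]) * 10 ** int(code[2])

print(decode_three_figure("472"))  # 4700 pF (4.7 nF)
print(decode_three_figure("104"))  # 100000 pF (100 nF)
```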


Electrical engineering - introductory notions (translated to English)

1. Introductory notions

To underline the importance of acquiring knowledge in electrical engineering, it is enough to note that the most widely used form of energy across different fields is electrical energy. Today every sector of activity uses electricity in the most diverse ways. Electric motors driving various tools, machines and means of transport convert electrical energy into mechanical energy; electric lamps convert it into light; electric furnaces convert it into heat for melting, heating or drying. If we also count its use in telecommunications, in automation and in household appliances, the range of applications of this form of energy is vast.

1.1. A classification of electrical quantities

a) By the presence or absence of their own energy:
Active quantities are those that have an associated energy, part of which can be used in the measurement process. The ratio between the total energy possessed by the quantity and the energy used for measurement should be as large as possible, so that the measured value is not affected. Examples of active quantities: temperature, electrical voltage, electric current, electric power.
Passive quantities are those that do not possess releasable energy of their own. To measure them, an auxiliary energy source is needed. Examples of passive quantities: resistance, capacitance, inductance, etc.

b) By their dimensional-spatial character:
Scalar quantities are completely determined by a single number.
Vector quantities are characterised by magnitude (intensity), direction and sense.

c) By their variation in time (Fig. 1.1): an electrical quantity may be constant or variable; a variable quantity may be deterministic or random; a deterministic quantity may be non-periodic (aperiodic) or periodic; and a periodic quantity may be sinusoidal, alternating or pulsating.

Fig. 1.1. Classification of measured quantities by their variation in time.

A constant quantity is one that does not change its value in time; it has only two parameters, amplitude and polarity.
A deterministic quantity is one whose evolution in time is predictable and can be described by a mathematical function, with unpredictability playing only a small part.
A random quantity shows unpredictable variation; the values it takes at different instants are random. Such quantities can only be characterised in a probabilistic sense, using statistical methods.


 

The mean value (DC component) of a random quantity over a time interval $t_1$–$t_2$ is given by relation (1.2), and its effective (RMS) value by relation (1.3):

$$X_{med} = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} x(t)\,dt \qquad (1.2)$$

$$X_{ef} = \sqrt{\frac{1}{t_2 - t_1}\int_{t_1}^{t_2} x^2(t)\,dt} \qquad (1.3)$$

where $t_2 - t_1$ is the integration time, or measurement time.

Fig. 1.2. A random quantity x(t) and its mean value X_med.

A periodic quantity has the property that the values it takes at given instants repeat after equal intervals of time. For a periodic quantity, the instantaneous (momentary) value x(t) satisfies

$$x(t) = x(t \pm T) \qquad (1.4)$$

where T is the period and f = 1/T is the frequency. In the time domain a periodic quantity can be described by its amplitude, frequency, period and phase; in the frequency domain such quantities are analysed using the Fourier series, which yields a discrete frequency spectrum. The mean value (DC component) of a periodic quantity is

$$X_{med} = \frac{1}{T}\int_{t_0}^{t_0 + T} x(t)\,dt \qquad (1.5)$$

and another parameter used to characterise periodic quantities is the effective (RMS) value

$$X_{ef} = \sqrt{\frac{1}{T}\int_{t_0}^{t_0 + T} x^2(t)\,dt} \qquad (1.6)$$

Application 1.1. Determine the mean value and the effective value of the periodic rectangular signal of Fig. 1.3 (amplitude A, pulse width τ, period T).

Fig. 1.3. Rectangular signal.

An alternating quantity is a periodic quantity whose mean value over one period is zero. The alternating quantities most often encountered in electrical engineering are shown in Fig. 1.4.

Fig. 1.4. The main alternating waveforms: a) sine wave, b) square wave, c) triangular wave, d) sawtooth wave (positive and negative areas equal).

Compared with DC voltage and current, whose values are generally stable in time, an alternating voltage alternates in polarity (Fig. 1.4) and an alternating current alternates in direction (Fig. 1.5).

Fig. 1.5. Direct current (a) and alternating current (b).

One way to express the intensity or magnitude of an alternating quantity is to measure its peak value $X_{max}$ or its peak-to-peak value $X_{vv} = 2X_{max}$ (Fig. 1.6).

Fig. 1.6. Peak value and peak-to-peak value of an alternating quantity.

Unfortunately, each of these values can mislead us if we compare two different waveform types. A rectangular voltage with a 10 V peak value is clearly "larger" over time than a triangular voltage with a 10 V peak, and the effect of these two voltages on the same load is different (Fig. 1.7).
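Application 1.1 can be checked numerically. The short Python sketch below computes the mean and RMS values of the rectangular pulse train of Fig. 1.3 both from the closed-form results (X_med = Aτ/T, X_ef = A·sqrt(τ/T)) and by direct averaging over one period; the amplitude, pulse width and period used are arbitrary example values.

```python
import numpy as np

# Example parameters for the rectangular signal of Fig. 1.3 (arbitrary values).
A, tau, T = 10.0, 2e-3, 5e-3     # amplitude [V], pulse width [s], period [s]

# Closed-form results for a pulse of height A and width tau in a period T.
x_med_formula = A * tau / T
x_ef_formula = A * np.sqrt(tau / T)

# Direct numerical averaging over one period.
t = np.linspace(0.0, T, 100_000, endpoint=False)   # uniform samples over one period
x = np.where(t < tau, A, 0.0)
x_med_numeric = x.mean()
x_ef_numeric = np.sqrt(np.mean(x ** 2))

print(f"mean: formula {x_med_formula:.3f} V, numeric {x_med_numeric:.3f} V")
print(f"rms : formula {x_ef_formula:.3f} V, numeric {x_ef_numeric:.3f} V")
```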



What is a Wireless Network's SSID?

An SSID is a 32-character alphanumeric key uniquely identifying a wireless LAN. Its purpose is to stop other wireless equipment accessing your LAN — whether accidentally or intentionally. To communicate, wireless devices must be configured with the same SSID. Most NETGEAR products have a "site survey" tool that automatically looks for other wireless devices and displays their SSID.

If you unselect Allow broadcast of SSID in a router or access point, the SSID of that device will not be visible in another device's site survey, and must be entered manually.

The SSID is not a strong security measure, and should be used in conjunction with other security such as WEP or WPA.

The Extended Service Set Identification (ESSID) is one of two types of Service Set Identification (SSID). An ad-hoc wireless network with no access points uses the Basic Service Set Identification (BSSID). In an infrastructure wireless network that includes an access point, the Extended Service Set Identification (ESSID) is used — although it may still be referred to, loosely, as the SSID. Some vendors refer to the SSID as the "network name".


What is PLC? (PowerLine Communications)
PowerLine Communications, or PLC for short, refers to all communication carried over electrical power-transmission lines. Another form of this technology is the one used to build broadband infrastructure.

    Device application: the AV200 Desktop unit makes it possible to create a coherent network for carrying data, voice, video and Internet over the electrical wiring. Simply plug each unit into a power outlet and connect it through its Ethernet port to any PC or laptop with an Ethernet port, and you have high-speed Internet over the power cables.
    Features at a glance
  • Up to 2 Mbps of full-duplex bandwidth over power cables
  • Support for multicasting and IGMP snooping
  • Support for data, voice and video transport
  • 802.1Q VLAN and optimized VLANs
  • QoS, 8 levels of queuing (802.1p)
  • 168-bit 3DES encryption
  • Coverage and access throughout the home or office
  • Remote management capability
  • Plug & Play

Optimisation of optical fibre amplifiers

Optimisation of Raman fibre amplifiers pumped by Gaussian pumps

Abstract: The use of broadband pumps in the design of Raman fibre amplifiers has recently attracted particular attention. Since high-power broadband semiconductor lasers have an approximately Gaussian spectrum, this paper minimises the gain ripple by combining several broadband pumps. The power and centre wavelength of each Gaussian pump are chosen for the optimum case using a variational method. Calculations were carried out for a Raman amplifier 25 km long with a 50 nm signal bandwidth and an average gain of 4.1 dB, and a gain ripple of 0.25 dB was obtained.

1. Introduction

Raman amplifiers, which operate on the basis of stimulated Raman scattering, are of great interest for optical communication systems [1]. Their advantage over semiconductor and erbium-doped amplifiers is that a signal at any wavelength can be amplified, with the fibre itself serving as the gain medium. In addition, Raman amplifiers have a high signal-to-noise ratio, and with suitable pumps the bandwidth of their gain spectrum can be broadened [2].

In broadening the gain bandwidth, the most important issue is flattening the gain spectrum and reducing its ripple. This is usually done with a large number of pumps (4 to 20). Of course, pumps of arbitrary wavelength and power cannot be used; the pump powers and wavelengths must be chosen so that the gain ripple is minimised [3]. To reduce the number of pumps and the cost, WDM pumping or pumps with a broad bandwidth can be used [4]. Another way to flatten the Raman gain function is to use pumps whose wavelength can be tuned rapidly over a wide range [5]; however, this approach has not yet seen real practical use.

The multi-pump and broadband-pump approaches evolved toward the use of a single pump with a continuous spectrum, which gives the best gain flattening [6], [7]. The pump shape obtained in reference [6], however, is very complicated and hard to realise in practice. It is therefore attractive to look for a continuous pump shape that minimises the gain ripple and is practically achievable. Three optimisation strategies are commonly used: (1) keeping the pump powers fixed and varying the wavelengths, (2) keeping the wavelengths fixed and varying the powers, and (3) varying and optimising both wavelengths and powers [7].

Since broadband semiconductor lasers have Gaussian spectra with a full width at half maximum of 10 to 15 nm, it is assumed here that the input pump power is a combination of several Gaussian pumps with a 10 nm bandwidth. To simulate a continuous pump, a large number of monochromatic (discrete) pump samples are used [8]. In previous work, the shape of the continuous pump curve was first obtained and a Gaussian fit to that curve then gave a practically achievable pump shape [8], [9], [10]. According to earlier reports, for a 100-km SMF-28 fibre the gain ripple obtained for two different continuous pump shapes is 0.2 dB and 0.4 dB [9], whereas using four discrete pumps at 1420, 1435, 1453 and 1480 nm the ripple is 0.8 dB [9]. For a 25-km SMF-28 fibre the ripple obtained by combining four Gaussian pumps is 0.3 dB, with an average gain above 0 dB [8]. In this paper, by contrast, the input pump power is assumed from the outset to be a combination of several Gaussian pumps of 10 nm bandwidth, and the centre wavelengths and amplitudes of these Gaussian pumps are then chosen so that the gain spectrum has minimum ripple. For this purpose the variational method and undetermined Lagrange multipliers are used.

2. Governing equations

Taking the Raman effect into account, the equations governing the propagation of the signal and pump powers are [11]:

$$\frac{dS_i(z)}{dz}=S_i(z)\left(-\alpha(\nu_i)+\sum_{j=1}^{N_s}g(\nu_i,\nu_j)\,S_j(z)+\sum_{j=1}^{N_m}g(\nu_i,\nu_j)\,P_j(z)\right),\qquad i=1,2,\dots,N_s \qquad (1)$$

and equation (2) is the corresponding propagation equation for each pump sample $P_i(z)$, $i=1,2,\dots,N_m$, containing a fibre-loss term, a term for the power exchanged with the signals and a term for the interaction between pump samples.

In the equations above, S_i(z) is the power of the i-th signal, P_i(z) the power of the i-th pump sample, ν_i the frequency of the i-th signal or pump sample, α the loss coefficient of the fibre, N_s the number of signals, N_m the number of pump samples, and g the Raman gain coefficient, defined by relation (3) [12]: g(ν_i, ν_j) is zero for ν_i = ν_j; for ν_i < ν_j it is given by the measured reference gain spectrum g_ref evaluated at the frequency difference, scaled by the ratio of the frequencies to the reference pump frequency ν_ref and divided by the polarisation factor Γ and the effective core area A_eff; and for ν_i > ν_j it takes the corresponding negative value, weighted by the frequency ratio ν_i/ν_j, which describes depletion. Here ν_i and ν_j are the frequencies of the waves interacting through the Raman effect, the factor Γ accounts for the random state of polarisation, A_eff is the effective cross-sectional area of the fibre core, and g_ref is the Raman gain constant measured at the reference pump frequency ν_ref. The Raman gain spectrum is shown in Fig. 1 [12].

Fig. 1: Raman gain coefficient (m/W) versus pump-signal frequency difference (THz), for a pump frequency of 200 THz [12].

In equation (1) the first term describes the attenuation of the signal along the fibre, the second term the signal-signal interaction, and the third term the interaction between the pump samples and the signal. In equation (2) the first term describes the attenuation of a pump sample along the fibre, the second term the interaction of the signals with that pump sample, and the third term the interaction of one pump sample with another. As equation (2) shows, pumping is assumed to be in the backward (counter-propagating) direction.

بر اين است كه عمل پمپ كردن در جهت معكوس 1 انجام

را به صورت زير تعريف ميكنيم: F ميشود. حال تابع هدف

(4)

_

2

1

( )

Ns

k

k

F G G

=

=Σ

(5)

( )

(0)

k

k

k

S L

G

S

=

) 6 ( Σ=

=

Ns

k

k

s

G

N

G

1

_ 1

نشان دهنده مجموع مربعات اختلاف بهره در هر فركانس F

ام، k بهره مربوط به سيگنال Gk ، نسبت به متوسط بهره

طول فيبر است. در واقع L متوسط بهره تقويت كننده و G

1 backward

3

با توجه به قيدهاي موجود است. از F هدف ما كمينه سازي
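The gain-ripple objective of equations (4)–(6) is straightforward to evaluate once the signal gains are known. The following short Python sketch computes G_k, the average gain and F from a set of example output and input powers; the numbers are placeholders, not results from the paper.

```python
import numpy as np

def gain_ripple_objective(s_out, s_in):
    """Equations (4)-(6): G_k = S_k(L)/S_k(0), mean gain, and F = sum (G_k - mean)^2."""
    g = np.asarray(s_out) / np.asarray(s_in)      # (5) linear gain per signal
    g_mean = g.mean()                             # (6) average gain
    f = np.sum((g - g_mean) ** 2)                 # (4) ripple objective
    return g, g_mean, f

# Placeholder example: 5 signals, all launched at 0.1 mW, with slightly unequal outputs.
s_in = [1e-4] * 5
s_out = [2.5e-4, 2.6e-4, 2.55e-4, 2.45e-4, 2.5e-4]
g, g_mean, f = gain_ripple_objective(s_out, s_in)
print("gains:", np.round(g, 3), " mean:", round(g_mean, 3), " F:", round(f, 5))
```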

Since the pump powers and the pump samples we choose are positive quantities, to prevent negative solutions for the pump samples we make the change of variable P_k(z) = Q_k²(z), where Q_k(z) is a real function, and the relevant equations are rewritten in terms of Q_k(z). At the end, Q_k²(z) is taken as the pump power sample and, to verify that the solution is correct, it is substituted back into equations (1) and (2) and the corresponding gain curve is plotted. Equations (1) and (2) can therefore be rewritten as

$$\frac{dS_i(z)}{dz}=S_i(z)\left(-\alpha(\nu_i)+\sum_{j=1}^{N_s}g(\nu_i,\nu_j)\,S_j(z)+\sum_{j=1}^{N_m}g(\nu_i,\nu_j)\,Q_j^{2}(z)\right),\qquad i=1,2,\dots,N_s \qquad (7)$$

and equation (8), the corresponding equation for Q_i(z), i = 1, 2, …, N_m, obtained by substituting P_i = Q_i² into (2); since dP_i/dz = 2Q_i dQ_i/dz, the right-hand side of (2) is divided by 2Q_i. The boundary conditions are

$$S_i(0)=S_{i0},\qquad i=1,2,\dots,N_s \qquad (9)$$

$$Q_i^{2}(L)=\sum_{k=1}^{N_p}E_k\exp\!\big(-a(\lambda_i-\lambda_k)^{2}\big),\qquad i=1,2,\dots,N_m \qquad (10)$$

where λ_k and E_k are the centre wavelength and amplitude of the k-th Gaussian pump and the constant a fixes the 10-nm full width at half maximum of the pumps. Relation (10) expresses the fact that the spectrum of the input pump power is assumed from the outset to be a combination of N_p Gaussian pumps, whereas in reference [10] no such constraint is imposed on the problem from the start. For equations (7) and (8) we introduce the undetermined Lagrange multipliers γ_k and β_k respectively, and define the generalised objective function as the sum of the objective function and the products of the Lagrange multipliers with equations (7) and (8) [13]:

The generalised objective J (equation (11)) is thus F plus, for each signal and each pump sample, the integral over the fibre length of the corresponding Lagrange multiplier γ_i(z) or β_i(z) multiplied by the left-hand-minus-right-hand side of equation (7) or (8).

The variation of J with respect to all of the variables is now taken and set equal to zero. Here, instead of δQ_i(L), the relation obtained by varying the boundary condition (10) with respect to the amplitudes E_k and the centre wavelengths λ_k of the Gaussian pumps is used (equation (12)), which contains the terms δE_k exp(−a(λ_i − λ_k)²) and the terms proportional to E_k(λ_i − λ_k) δλ_k.

Setting δJ = 0 yields the equations governing the Lagrange multipliers together with the corresponding boundary conditions [13]: equations (13) and (14) are the differential equations for γ_i(z), i = 1, 2, …, N_s, and for β_i(z), i = 1, 2, …, N_m, which couple the multipliers to the signal and pump-sample powers through the Raman gain coefficients; equation (15) gives the final-value condition for γ_i(L), expressed through the signal gains G_i, the average gain Ḡ and the input powers S_i(0); equation (16) gives the initial condition

$$\beta_i(0)=0,\qquad i=1,2,\dots,N_m \qquad (16)$$

and equations (17) and (18) are the 2N_p algebraic conditions, involving β_i(L), Q_i(L) and the Gaussian terms exp(−a(λ_i − λ_k)²), that the optimum centre wavelengths λ_k and amplitudes E_k must satisfy.

Since the initial values of the signals are specified (δS_i(0) = 0), no special boundary condition is obtained for γ_i(0); its value follows from solving the corresponding equations. The number of differential equations obtained is 2(N_m + N_s), and the number of boundary conditions is also 2(N_m + N_s); as we know, however, the boundary conditions for the Q_i(L) contain 2N_p unknowns, and the number of algebraic equations (17) and (18) is likewise 2N_p, so this system of equations is solvable.

3. Numerical calculations

For the calculations a 25-km SMF-28 fibre is used, whose effective-area and loss-coefficient curves are shown in Fig. 2 [8].

Fig. 2: Effective area (µm²) and attenuation coefficient (1/km) of SMF-28 versus wavelength [8].

In the calculations it is assumed that all signals have equal input powers of 0.1 mW, that is, S_k(0) = 0.1 mW = −10 dBm.

The equations we are dealing with form a coupled system of nonlinear differential and algebraic equations. They are solved as follows. In the first step an initial guess is made for the centre wavelengths and amplitudes of the Gaussian pumps, so that, from equation (10), the boundary condition for the final values Q_i(L) is available. The total number of equations (9), (10), (15) and (16), which are in fact the boundary conditions of the problem, is 2(N_s + N_m); the number of differential equations is also 2(N_s + N_m). The numbers of differential equations and boundary conditions are therefore equal and the differential equations can be solved. Since some of the equations have initial values, some have final values, and others have boundary conditions coupled to the boundary conditions of the remaining equations, the system of differential equations is solved by the finite-difference method.

The number of equations (17) and (18) is 2N_p; these form a system of nonlinear algebraic equations. After the differential equations have been solved, the Newton-Raphson algorithm is applied and, from the values obtained for β_i(L), the centre wavelengths and amplitudes of the Gaussian pumps for the next step are found. With these new values the system of differential equations is solved again, and the procedure is repeated until the λ_k, E_k, β_i(L) and Q_i(L) obtained from the differential equations satisfy relations (17) and (18).

In the calculations we take 40 signals, 60 pump samples (N_m = 60), four Gaussian pumps (N_p = 4), and a signal band from 1525 to 1575 nm. The full width at half maximum of the Gaussian pumps is the same for all of them and equal to 10 nm. Fig. 3 shows the gain spectrum obtained after the optimisation; the average gain is 4.1 dB and the gain ripple is 0.25 dB.

Fig. 3: Signal gain spectrum obtained after optimisation, versus signal wavelength.
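The outer iteration described above alternates a finite-difference solve of the differential equations with a Newton-Raphson solve of the algebraic conditions. As a hedged illustration of the Newton-Raphson step only, here is a short Python sketch for a small nonlinear algebraic system; the example system is a toy placeholder, not equations (17)-(18).

```python
import numpy as np

# Newton-Raphson iteration for a small nonlinear algebraic system F(x) = 0,
# of the kind solved here for the pump centre wavelengths and amplitudes.
# The example system below is a toy placeholder, not equations (17)-(18).

def F(x):
    return np.array([x[0] ** 2 + x[1] - 2.0,
                     x[0] + x[1] ** 2 - 2.0])

def jacobian(x, eps=1e-7):
    n = len(x)
    J = np.zeros((n, n))
    f0 = F(x)
    for k in range(n):
        dx = np.zeros(n)
        dx[k] = eps
        J[:, k] = (F(x + dx) - f0) / eps   # forward-difference Jacobian column
    return J

x = np.array([0.5, 0.5])                    # initial guess
for _ in range(20):
    step = np.linalg.solve(jacobian(x), -F(x))
    x = x + step
    if np.linalg.norm(step) < 1e-12:
        break
print("solution:", x)                       # converges to (1, 1)
```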

Fig. 4 shows how the signal powers evolve along the fibre; the amplification of the signals and the backward direction of the pumps are clearly visible.

Fig. 4: Evolution of the signal powers (mW) along the fibre, versus signal wavelength and fibre length.

Fig. 5 shows the spectrum of the optimised input pump power. The centre wavelengths of the Gaussian pumps are 1421.6, 1438.2, 1460.2 and 1484.3 nm.

Fig. 5: Spectrum of the input pump power.

In Fig. 5 the total input pump power is 656 mW. The variational method has previously been used to optimise Raman amplifiers pumped by monochromatic pumps [13] and by continuous pumps with a Gaussian fit [10]. As shown in this paper, the method can also be used to optimise Raman amplifiers pumped by Gaussian pumps; unlike the earlier approaches, however, the pump shape is assumed Gaussian from the outset, and the centre frequencies and powers of the pumps are determined by the variational method so as to minimise the gain ripple, with no Gaussian fitting involved. The drawback of Gaussian fitting is that the full width at half maximum of the resulting Gaussian pumps may be smaller than what ordinary semiconductor lasers provide [10]. Moreover, with this method various constraints can be imposed on the pump powers, on the total pump power and on the average gain of the amplifier.

4. Conclusion

The variational approach is an accurate and efficient method for optimising Raman amplifiers under any kind of constraint. This paper has shown that it can be used to find the shape and characteristics of the Gaussian pumps needed to minimise the gain ripple. The method can be used not only to find the shape of Gaussian pumps but also to find pumps of any desired shape, and it places no restriction on imposing arbitrary constraints on the total pump power and the average gain of the amplifier.

 

+ نوشته شده در ساعت توسط ... |

Analysis and optimisation of a power amplifier using the genetic algorithm

Hadi Ariakia (1), Masoud Mosaddegh (1), Ebrahimi Rad (2)
(1) Islamic Azad University, Central Tehran Branch; (2) University of Tehran, Faculty of Electrical and Computer Engineering

Abstract - Genetic algorithms use Darwin's principles of natural selection to find an optimal formula for prediction or pattern matching. Genetic algorithms are often a good alternative to regression-based prediction techniques. Briefly, a genetic algorithm (GA) is a programming technique that uses genetic evolution as a problem-solving model. In this paper we examine a power amplifier, use a genetic algorithm to find optimal parameter values for the circuit, and then analyse the results obtained.

Keywords - genetic algorithm, power amplifier, drawbacks of genetic algorithms

1. Introduction

The law of natural selection says that only those members of a population that have the best characteristics continue the line, while those lacking them gradually die out over time. It is more accurate, though, to say that nature selects the fittest rather than the best. Sometimes more evolved species appear in nature that cannot be explained purely as the gradual evolution of an earlier species; what may partly explain such events is a concept called chance, or mutation. Building on these properties of the genetic algorithm, we set out to analyse and optimise a power amplifier. We first examine the structure of a power amplifier and complete it according to the requirements, then obtain the values of some of its parameters with a genetic algorithm, and finally apply the obtained values to the model and examine the results. The paper is therefore divided into three main parts:

Part one: an explanation of the genetic algorithm.
Part two: analysis of a power amplifier and derivation of the system transfer function.
Part three: analysis of the parameters of interest using the genetic algorithm.

2. An explanation of the genetic algorithm

The genetic algorithm (GA) is a search technique in computer science for finding optimal solutions and solving search problems. Genetic algorithms are one kind of evolutionary algorithm, inspired by biology: inheritance, mutation, random selection, natural selection and recombination [2].

In the 1970s a scientist named John Holland proposed using the genetic algorithm in engineering optimisation. The basic idea of the algorithm is the transfer of inherited characteristics by genes. Suppose a person's set of characteristics is passed to the next generation by chromosomes; each gene in these chromosomes represents one characteristic. Two things happen to the chromosomes at the same time. The first is mutation: some genes change completely at random (the number of such genes is very small). The other is the joining of the start of one chromosome to the end of another, known as crossover (this happens far more often than mutation). This is what allows a child, for example, to inherit some characteristics from the father and some from the mother, and prevents the child from being identical to just one parent [1].

2

We begin by selecting a certain number of inputs X1, X2, …, Xn belonging to the sample space X and representing each of them as a vector X = (x1, x2, …, xn). In software engineering these are called organisms or chromosomes, and a group of chromosomes is called a colony or population. In each period the colony grows and evolves according to specific rules that mirror biological evolution.

For each chromosome Xi we have a fitness value, which we denote f(Xi). The stronger elements, that is, the chromosomes whose fitness is closer to the optimum of the colony, have a better chance of surviving through the following periods and of reproducing, while the weaker ones are doomed to extinction. In other words, the algorithm keeps the inputs that are closer to the optimal answer and discards the rest.

Another important step in the algorithm is reproduction, which happens once in each period. The contents of the two chromosomes taking part in reproduction are combined to create two new chromosomes, which we call children. This heuristic lets us combine two of the best individuals to create one that is better still, as shown in figure 1. In addition, during each period of evolution a number of chromosomes may undergo mutation [4].

Each input x is placed in a vector X = (x1, x2, …, xn). To run our genetic algorithm we must turn each input into a chromosome. We can do this by allocating log(n) bits to each element and converting the value of Xi, as in figure 2. Any encoding scheme for the numbers may be used. In period 0 we pick a set of inputs X at random; then, in each i-th period, we compute the fitness value and apply reproduction, mutation and selection. The algorithm stops when our termination criterion is met.

Solutions are usually represented as binary strings of 0s and 1s, but other representations also exist. Evolution starts from a completely random set of entities and is repeated in subsequent generations. In each generation the fittest, not the best, are selected.

There are various methods in genetic algorithms that can be used to select genomes, but the methods listed below are among the most common.

Elitist selection: the fittest member of each population is selected.

Roulette selection: a selection method in which an element with a higher fitness number is more likely to be selected.

Scaling selection: as the average fitness of the population rises, the weight of selection also becomes heavier and more fine-grained. This method is useful when the set contains elements with large fitness values separated only by small differences.

Tournament selection: a subset of the population is chosen, the members of that subset compete with one another, and in the end only one individual from each subgroup is selected for reproduction.

Some other methods are: Rank Selection, Generational Selection, Steady-State Selection and Hierarchical Selection.

In the crossover method, two chromosomes are selected to exchange segments of their code. This process is modelled on the recombination of chromosomes during reproduction in living organisms. The most common crossover methods include single-point crossover, in which the crossover point lies at a random position between the genomes; the part before the point comes from one parent and the segment after it from the other, with the parents contributing 50/50.

Figure 3 shows the effect of each of the genetic operators on 8-bit chromosomes. The upper row shows two genomes with the crossover point between the 5th and 6th positions; a new genome is obtained by splicing these two parents. The second row shows a genome that has undergone mutation, with a 0 at that position turned into a 1.
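As a toy illustration of the loop described above (population, fitness-based selection, single-point crossover and rare mutation), the following MATLAB sketch evolves 8-bit chromosomes toward a simple target pattern. The target, population size and rates are arbitrary values chosen here for illustration; they are not taken from the paper.

% Toy GA: evolve 8-bit chromosomes toward an arbitrary target pattern.
target = [1 0 1 1 0 0 1 0];             % illustrative target, not from the paper
nPop = 20; nBits = 8; nGen = 50; pMut = 0.02;
pop = randi([0 1], nPop, nBits);        % period 0: a random colony
for g = 1:nGen
    fit = sum(pop == target, 2);        % fitness: number of matching bits
    [~, idx] = sort(fit, 'descend');
    parents = pop(idx(1:nPop/2), :);    % keep the fitter half (elitist-style selection)
    children = zeros(nPop/2, nBits);
    for c = 1:nPop/2
        p = parents(randperm(size(parents,1), 2), :);  % pick two parents
        cut = randi(nBits-1);                          % single-point crossover
        children(c,:) = [p(1,1:cut), p(2,cut+1:end)];
    end
    pop = [parents; children];
    mask = rand(size(pop)) < pMut;      % rare random mutations flip a few bits
    pop(mask) = 1 - pop(mask);
end
fit = sum(pop == target, 2);
[~, iBest] = max(fit);
best = pop(iBest, :)                    % fittest chromosome after the last period

Running the sketch typically recovers the target pattern within a few dozen generations, which is the behaviour the selection/crossover/mutation description above is getting at.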

3. Analysis of a power amplifier and derivation of the system transfer function

The amplifier that we want to examine first has the general schematic shown in figure 4. At the end of this section a lead (phase-lead) controller will be added in the feedback path. We begin the analysis and the derivation of the system transfer function with the following assumptions.

Operational amplifier characteristics:
Open-loop gain: A1 = 10^6
Lower cutoff frequency: f1 = 10 Hz
Upper cutoff frequency: f2 = 1 MHz
Voltage drop: 2.5 V

Transistor amplifier characteristics:
Open-loop gain: A1 = 10
Upper cutoff frequency: 10 kHz (the pole denoted f3 in the Simulink model)
Voltage drop: 1.5 V

The open-loop and closed-loop transfer functions are shown in figure 5, and the Simulink model of the amplifier is shown in figure 6, in which T1 = 1/(2πf1), T2 = 1/(2πf2), T3 = 1/(2πf3) and B1 = 0.01.

Entering the transfer function, we compute the Bode plot and the margins for this circuit, which are plotted in figures 7 and 8. The corresponding m-file, named ampli1.m, is provided in the appendix of this paper.


Gain margin in dB = 1.011
Corresponding frequency = 100147.9888
Phase margin in degrees = 0.064197
Corresponding frequency = 99583.2111

As you can see, the gain margin and the gain-crossover frequency are 1.011 dB and about 100 kHz respectively, and the phase margin and the phase-crossover frequency are 0.064 degrees and about 99 kHz; the system is on the verge of instability.
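For readers who want to reproduce this open-loop check, the sketch below is a rough model built with the Control System Toolbox. It is not the authors' ampli1.m, and the way the two stages and the feedback factor B1 are combined is an assumption based on the description of figures 4–6.

% Rough model of the cascade (assumed topology, not the authors' ampli1.m)
A1 = 1e6;  f1 = 10;   f2 = 1e6;        % op-amp: gain, lower and upper cutoffs
A2 = 10;   f3 = 10e3;                  % transistor stage: gain and cutoff
B1 = 0.01;                             % feedback factor from the Simulink model
s  = tf('s');
Gop = A1 * (s/(2*pi*f1)) / ((1 + s/(2*pi*f1)) * (1 + s/(2*pi*f2)));  % band-limited op-amp
Gtr = A2 / (1 + s/(2*pi*f3));                                        % single-pole transistor stage
L   = Gop * Gtr * B1;                  % loop gain seen by the overall feedback
[Gm, Pm, Wcg, Wcp] = margin(L);        % gain/phase margins and crossover frequencies
margin(L)                              % Bode plot annotated with the margins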

This time we try to increase the stability by applying feedback around the transistor amplifier stage. The circuit diagram, the block diagram and the Simulink model of this circuit can be seen in figure 9.

The file Ampli2.m, included in the appendix, was used to obtain the overall system transfer function and the Bode plots. The results, the Bode plots and the step response can be seen in figure 10.

As is clear from the plot in figure 10 and the results obtained earlier, the phase margin and the gain-crossover frequency have gone from 0.06 degrees and 99.6 kHz to 18.9 degrees and 95.3 kHz. Although this improvement was what we wanted, these values are still not reliable enough for an amplifier.

Another approach commonly used to increase the phase of a system is a lag (phase-lag) controller. However, since a lag controller imposes a delay on the system, it slows the system response down, which causes severe oscillations at the amplifier output.

A lead (phase-lead) controller was therefore used; because a lead controller behaves like a differentiator, it increases the gain and the speed of the system response. Figure 11 shows the circuit with the lead feedback together with its Simulink model.
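For reference, the generic first-order lead compensator (the specific component values of the circuit in figure 11 are not given here) can be written as C(s) = K(1 + sT)/(1 + sαT) with 0 < α < 1. The zero at 1/T adds positive phase and extra gain around the crossover frequency, which is the differentiator-like behaviour mentioned above.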

4. Analysis of the parameters of interest using the genetic algorithm

To implement this amplifier in the genetic algorithm program in MATLAB, we first compute the overall transfer function.


As you can see in figure 12, to do this we must first compute the transfer function of the innermost loop. The negative-feedback transfer function C1 is obtained from the formula

C1 = G1 / (1 + G1·G2)

in which A1 = 10 and T3 = 1.59×10⁻⁵, and k1 is our gain, which is unknown and whose optimal value we want to obtain through the genetic algorithm.

In accordance with figure 13 we compute the transfer functions C2 and C3, in which A1 = 1×10⁶, T1 = 0.0159, T2 = 1.59×10⁻⁷ and T = 2.2×10⁻⁵.

C4 is the overall transfer function. We define its numerator and denominator as Laplace-domain polynomial vectors named num and den and use them in peacksfcn.m in the Genetic Algorithm. Here we have two unknowns, k1 and k2, and we want to obtain the best values for these gains through the Genetic Algorithm program. We define peacksfcn.m as follows:

function z = peacksfcn(input)          % function wrapper assumed; the original listing starts at the next line
k1 = input(1); k2 = input(2);          % candidate feedback gains passed in by the GA driver
z = 0;
t = 0:0.01:2;                          % simulation time vector
num = [2.48776e20*(k2+45454.5)];       % numerator of the overall transfer function C4
den = [k2+45454.5, ...
       628931*(k2+45454.5)*(k1+10.1001), ...
       3.95558e12*(k2*(k1+6.28925e7)+45454.5*(k1+0.1001)), ...
       2.48776e14*(k2*(k1+4.54545e10)+45454.5*(k1+0.1))];
y = step(num, den, t);                 % step response of C4 for this (k1, k2)
for i = 1:length(y)                    % length(y) rather than size(y), which returns a vector
    z = z + (y(i)-100)^-2;             % accumulate the inverse squared deviation from the target output of 100
end
end

To obtain the optimal values, the error must be as small as possible. To measure it we use the fitness function

Fitness = Σ_i ( y_out(i) − y_e )²

where y_e is the optimal value that the output should reach (the listing above accumulates the inverse of this quantity, so larger values indicate a better candidate).

We set the number of generations, the population size and the number of genes to 100, 100 and 32 respectively, so that the Genetic Algorithm obtains the optimal values of the gains k1 and k2 as well as possible, and we set the number of variables to 2 by assigning var_n = 2 in the Genetic Algorithm.

In clab.m we also define the range of our variables; here we define both variables to range from 0.0001 to 2.

Now, in clab, we press the F5 key to run the program and obtain the optimal values of k1 and k2; the result can be seen in figure 14.

In the hundredth generation the values K1 = 1.994361 and K2 = 0.009998 were obtained.

From the menu Tools/ControlDesign/LinearAnalysis.. we select and run the linear analysis. As you can see in the figure below, the output reaches its optimal value of 100 within 1.5×10⁻⁴ s. So, by obtaining the optimal feedback gains through the Genetic Algorithm, the best answer was reached in the shortest time, and the overall gain of the circuit has also become 100.
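For readers without the authors' clab.m driver, a roughly equivalent run can be set up with the ga function of MATLAB's Global Optimization Toolbox. Note that ga minimizes its objective, so the inverse-error fitness above is negated here, and option names can differ between MATLAB releases; this is an assumed sketch, not the procedure used in the paper.

% Assumed alternative driver using the Global Optimization Toolbox (not clab.m).
% ga minimizes, so we minimize the negative of the fitness returned by peacksfcn.
obj  = @(x) -peacksfcn(x);             % x = [k1 k2]
lb   = [1e-4 1e-4];  ub = [2 2];       % same variable range as in the paper
opts = optimoptions('ga', 'PopulationSize', 100, 'MaxGenerations', 100);
[xBest, fBest] = ga(obj, 2, [], [], [], [], lb, ub, [], opts);
k1 = xBest(1); k2 = xBest(2);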

5. Conclusion

One of the inherent advantages of the genetic algorithm is its parallelism. Most other algorithms are not parallel and can only search the problem space in one direction at a time, and the solution found may be a local optimum or only a subset of the true solution. Another advantage of the genetic algorithm, which we exploited here, is the ability to change several parameters at once. In real problems one cannot restrict attention to a single feature to be maximized or minimized; several aspects must be considered. For this reason, through the genetic algorithm we were able to reach the desired result: in addition to obtaining a suitable step response, whose plot is shown in figure 15, we reached the target gain of 100. It should also be said that the genetic algorithm has drawbacks. One difficulty is how to write the fitness function so that it leads to the best solution for the problem. Another problem, which we call premature convergence, is that if a genome whose fitness is far above the rest of its generation appears too early, it can constrain the search and drive the solution toward a local optimum. Unfortunately, in that case all of the time spent on the computation is wasted and the computation must be restarted from the beginning.


Op amp (operational amplifier)

Operational Amplifiers (Op-amps)

Objectives:
Studying amplifier circuits built with op-amps
Solving and simulating differential equations with op-amps


Introduction:

In previous discussions we focused our study on the transistor and transistor circuits. We saw that the most important issue for a transistor circuit is determining the best operating point. We therefore put our effort into analyzing the circuit according to its configuration, and we reached the point of introducing simple models for the transistor so that we could better understand the role of this element in a circuit. We also examined several of its configurations in a few widely used circuits, and finally we studied multistage amplifier circuits. We saw that these multistage circuits, which contain several transistors, are essentially built by placing simple single-transistor circuits side by side. The key point hidden in them was that the stages must be cascaded so that the impedances of each stage match those of the next; for this we had to determine the operating point of each transistor precisely, because we know that the operating point plays the most critical role in determining the parameters of a transistor circuit. Consequently, when the number of stages grows, correctly determining the operating point for every single transistor becomes time-consuming and in some cases very difficult.

The question now arises whether, given the characteristics described, we can build a circuit in which the operating points of all the transistors have been fixed once and for all, and which we can use in different applications, provided that we no longer deal with its internals and it simply has one or more inputs and outputs. Could it also be usable in different configurations?

Searching for the answer to this question leads us to a circuit built in integrated form, to which we can easily apply suitable inputs whenever we want and obtain a correspondingly amplified output.

In this discussion we want to look at how such a circuit works. Our intention is not to examine its interior in detail; rather, we want to study how small signals are amplified from the external point of view of the integrated circuit, that is, to analyze the whole circuit containing this IC without having any information about its internals.


From now on we will call this integrated circuit an operational amplifier, and refer to it as an op-amp for short.

The op-amp from a physical point of view: op-amps are circuit elements around which, by connecting a few resistors, capacitors and other suitable components, a circuit can be designed to meet our needs with high accuracy; once we know the nature of this device, working with it and analyzing it will be very easy.

We know that an ideal amplifier must have infinite input resistance and, in return, zero output resistance, a point that is very important in multistage amplifier circuits. So if we are to have an amplifier, it must certainly have these properties. The picture we have in mind can be seen in figure 1, which shows the characteristics of an ideal multistage amplifier.

Figure (1): an ideal multistage amplifier — input resistance Ri (ideally infinite), dependent source A·Vi, output resistance Ro (ideally zero), input Vi and output Vo.

In the figure above, A is the gain of the circuit, and A·Vi is the amplified voltage that we expect at the output. For an ideal amplifier, A tends to infinity.

Now, with the figure above in mind, we want to build some intuition about the inside of an op-amp, which we claim is an ideal amplifier. For this we first need to know the stages used in a typical op-amp, which are shown in the figure below:

Figure (2): the stages used in an op-amp

Since we do not intend to examine the inside of the op-amp here, we only give brief explanations of each stage; you can see the actual internal circuit of an op-amp in the appendix to this chapter, where we will also look at its interior to some extent.


As we see in figure 2, the first stage is a differential amplifier. This choice enables us to apply two separate inputs to the circuit at the same time, which gives our circuit more flexibility. In addition, it provides a high input impedance for our circuit. So for the first stage we use a diff-amp.

In the second stage we need a very large amount of amplification; for this purpose we use several transistors in the common-emitter configuration, and the second stage, which itself contains several simple common-emitter stages, is built this way.

Next we place the level-shifting stage, in which we adopt the convention that whenever Vi = 0, in other words whenever the amplifier input is zero, we also observe zero voltage at the output. Moreover, no capacitors of any kind should be used in this amplifier, so that it can also be used with DC signals.


What is MMC?

Modernising Medical Careers (MMC) aims to ensure that more patients are treated by fully trained

doctors, rather than doctors in training. The new career structure and training programmes will give

doctors a clear career path where advancement is attained through the acquisition of set

competences rather than time spent in a particular role. It will improve patient safety by ensuring

junior doctors in their early years of training are well supervised and assessed against explicit

standards set out in curricula for each specialty.

Transitional arrangements from the current system to the new system are being finalised. The new

training programmes will start from August 2007. The Postgraduate Medical Education and

Training Board (PMETB) is considering the new specialty curricula, all of which have now been

submitted and many of which are already approved. They have indicated that the process will be

completed by early 2007. Also being finalised are:

■ the proposed number of posts available for recruitment into specialty and general

practice training. Current indications are that 2007 will provide an enormous opportunity for

doctors to compete for entry into training – probably the best ever.

■ new recruitment arrangements supported by an electronic application portal (MTAS)

which will be far more efficient for applicants and for the service, resulting in a transparent

and cost-effective process.

MMC is perhaps the most fundamental change to medical training since the NHS came into being,

with major implications for how clinical services are delivered. Doctors will be trained to explicit

national standards determined by a new statutory body, the Postgraduate Medical Education and

Training Board (PMETB). Patient services will be delivered by more fully-trained doctors which will

improve patient safety and care; and patients, junior doctors and employers will understand what

they can expect and what is expected of them. The time is right, preparation is in the final stages

and arrangements will be ready for the planned launch of the new specialty training programmes

from August 2007.

[Figure: UK MMC Career Framework — medical school (4–6 years) leads into foundation training (F1, F2), then into specialist and GP training programmes (run-through training) or fixed-term specialist training appointments (FTSTA), and on to senior medical appointments or career posts, with continuing professional development throughout. Arrows indicate competitive entry; the CCT route leads to the Specialist and GP Registers, with an Article 14/11 route for existing training and non-training posts.]

Some specialities will have common training to start with. You will be able to apply for the following:

Acute Care Common Stem (ACCS)

Anaesthesia

Basic Neurosciences Training (BNT)

Chemical Pathology

General Practice

Histopathology

Medical Microbiology

Medicine in General

O&G

Ophthalmology

Oral & Maxillofacial Surgery (OMFS)

Paediatrics

Psychiatry

Public Health

Radiology

Surgery in General

Otolaryngology (ENT)

MMC – what will it mean for:

Medical students

In October, a new online application system for the Foundation Programme was launched. It allows

you to submit a single electronic application to any foundation school and programme in the UK.

The deadline for applications is 5 December 2006 and no applications will be processed after that

date.

Halfway through your F2 year, you will apply for a specialty/GP training programme or a fixed term

specialty training appointment. Having undertaken placements in a number of specialties during

foundation training, you must make a choice, based on your preferences and aptitudes, on what

geography and general specialty grouping you want to apply for. Careers advice will be available

through your foundation school. In Oxford there will be MedicCareers workshops held regularly in
each postgraduate centre (see below).

Foundation doctors (F2)

Foundation doctors will be able to apply for a run-through specialty or GP training programme

during their F2 year.

Applicants can apply for:

■ one specialty in four locations;

■ four specialties in one location; or

■ two specialties in two locations.

You will only be able to apply at the first year level of training in any specialty (ST1). Carefully read

the person specifications for ST1 and consider what direction you want your career to take.

Careers advice will be available to you through your clinical tutor or educational supervisor, and

you will have an opportunity to attend a MedicCareers workshop in your local postgraduate centre.

This 4 hour workshop takes you through the necessary steps to choose the best specialty for you

and gives you information about issues you need to be aware of when making your application. It

also provides a comprehensive workbook for future reference. If you have not been notified about

this workshop, contact your postgraduate centre manager.

Specialty tutors will also continue to be available to answer questions about individual specialties.

You can find their contact details under each specialty heading on the Oxford Deanery CDU

website (www.oxforddeanerycdu.org.uk ) You should think widely and flexibly about your career

options – consider a Plan A, B and C!

You will need to provide evidence that you have the competences set out in the Foundation

Programme curriculum, and it is likely that you will be asked to produce your portfolio during the

selection process to provide the evidence of any statements you made on the application form

about your competencies and also anything you said you have done to demonstrate an interest or

commitment to your chosen specialty. (See Portfolio below)

If you are an F2 doctor and are thinking about going to work abroad or take time out after you

complete your foundation training, you might wish to consider whether this is the best time to do

so. Since there should be many opportunities available for entry into specialty/general practice

training in 2007 (the transition year) you are likely to have the best chance of competing for a

national training number (NTN). You might want to apply for an NTN in 2007, start your specialty

training and then, in discussion with your training programme director, consider whether time out of

your training programme is something you want to pursue. Importantly, you will not be able to defer

the start of your training programme for the purpose of going abroad if you gain a place on a

programme starting in 2007.

Senior House Officers (SHOs)

Specialist Training

If you want to get into a specialty/GP training programme, you will need to apply in January 2007. It

is anticipated that there will be a significant number of entry points across a range of specialities

and locations.

Like foundation doctors, you can apply for one specialty in four locations; four specialties in one

location or two specialties in two locations.

The person specifications are now available on the MMC websites. You should consider at what

level you are eligible to apply for a specialty so look carefully at the different person specifications

for each level. Bear in mind that you will need to provide evidence of competences as indicated in

the appropriate person specification. You can find details of what is considered appropriate

evidence on the Royal College websites and you are likely to have to demonstrate this evidence in

your portfolio.

■ Entry to the SHO grade will end in July 2007. For SHOs who have employment

contracts extending beyond that date, contracts will be honoured, if the SHO so wishes.

However, if you are in this position, you are advised to apply for entry into specialty training

for 2007 since, during this transition year, there will be many more opportunities available

than at any other time. You do not have to resign your SHO contract if you decide to apply,

only if you are successful and decide to accept an offer into specialty training;

■ requests at short notice for time off for interviews should be honoured during the

recruitment period.

Fixed Term Specialist Training Appointments ( FTSTAs)

These are:

· Educationally approved posts for SHOs with <3 years experience

· Only available in ST1 and ST2 posts ( except ST3 in paeds or psych)

· Apply for 1 year only

· Individuals can only apply for up to 2 yrs as a FTSTA altogether

· When the FTSTA is completed you can:

o Apply for ST2/3

o Apply for a career grade

o Consider an alternative training programme

· Doctors in this grade will also be known as StRs ( specialty registrars)

Career Grade

SHOs with more than 3 years experience who are not successful in obtaining a training post can

apply in the normal way to Acute Trusts for career grade posts. These are not educationally

approved posts.

Specialist Registrars (SpRs)

As a Specialist Registrar, you can continue your training as it is currently structured. However, you

might consider looking at your new curriculum (available from the relevant royal college website)

and discussing whether it would be advantageous for you to make the move to the new curriculum

with your training programme director.

■ Entry to the SpR grade will end in January 2007, but those in the grade will be able to

continue until they have finished their programme (subject to their progress); or apply to

switch to the new programme.

General Practitioner trainees

If you would like to apply for general practice, you will be able to apply for the full programme using

the new curriculum, or during the transition year (2007), it is likely that you will be able to apply at

the level of ST2 or ST3, using the existing curriculum.

The new programme for GP training (beginning at ST1 in 2007) will be based on the new

curriculum recently approved by PMETB and will be assessed by the new MRCGP examination.

Assessment for CCT – Transition Arrangements

From August 2007 it is proposed that there will be a single new assessment process for doctors

wishing to obtain a CCT (Certificate of Completion of Training) in general practice. This new

assessment will also be an essential requirement for entry to the GMC Generalist Register and

Membership of the Royal College of General Practitioners (MRCGP).

Details of this can be found on the RCGP website – www.rcgp.org.uk.

The exam will consist of three elements:

■ a knowledge test;

■ a clinical skills assessment which will include observed consultations using patient

simulators(OSCE)

■ work-based assessment carried out during the training placement.

If you are on a GP Vocational Training Scheme (VTS) you will be able to continue your training as it is

currently structured. Entry to Vocational Training Schemes (VTS) will close and the “Do-it-yourself”

programmes will no longer be available. After the transition year, doctors will enter into the new

programme using the new curriculum.

To prepare for the changes:

· familiarise yourself with the main learning requirements of the new programme which

are best presented in the first curriculum statement “Being a GP” available at the

RCGP website as above; Also look at the national GP recruitment website or MMC site

and download the person specification and application details

(www.gprecruitment.org.uk)

· seek advice if need be from local GP trainers in your medical school, foundation

school and GP training programme. Every current GP VTS scheme in the Oxford

Deanery has a Course Organiser who will be the new Programme Director for general

practice based in each postgraduate centre. They will be well placed to advise you and

you can ask your postgraduate centre manager for contact details.

Staff and Associate Specialist (SAS) doctors

As an SAS doctor, you will be eligible to apply for entry to specialty training programmes and fixed

term specialist training appointments (FTSTAs) like all other applicants, provided you match the

requirements laid out in the relevant person specifications.

You will need to provide evidence that you have acquired the required competences. You can find

details of what is considered appropriate evidence on the medical royal college websites and this

will need to be part of your portfolio which you will need to produce during the selection process.

Research trainees

The Academic Subcommittee of the Modernising Medical Careers and UK Clinical Research

Collaboration aims to improve the academic career prospects for medically and dentally qualified

researchers and educationalists in the United Kingdom. The pocket guide for new academic

training pathways can be downloaded from the MMC website (see below).

For the transitional year (2007), research will continue to be a feature of the person specifications

for all levels. This is to ensure that if you are a current SHO who has made the career choice to

undertake research, you are not disadvantaged.

If you are in the middle of a research degree but meet the entry requirements for a specialty, you

are eligible to apply for a national training number (NTN) now and ask for a deferred start date.

Deferrals will be for up to 3 years from the time you registered for your degree.

There are research opportunities at:

■ foundation level through one- and two-year integrated academic programmes;

■ specialty level through the Academic Clinical Fellowship and Clinical Lecturer

programmes.

See the www.nccrcd.nhs.uk for more information about these opportunities.

Doctors in specialty training can also take ‘time out’ to undertake research, but this should be in

order to pursue a formal research qualification (e.g. MD or PhD). If you are considering an

academic research career, you should seek advice from your postgraduate dean.

Less than full time trainees

The introduction of MMC will not interfere with current arrangements for less than full time training

in any way. Indeed, it may be that the duration of time a flexible trainee will spend in training will

shorten overall as the full impact of the new competence based curricula is realised.

If you need to work less than full time and have good reasons to do so, you will need to have a

discussion with your postgraduate dean or his representative to ensure that you are eligible.

Decisions will be made on a case by case basis, but acceptable reasons include: disability or ill-health;

caring for an ill/disabled partner, relative or other dependent; or childcare. Please note that

doctors must undertake training on at least a half-time basis in order to comply with the

requirements of the European Specialist Qualification Order (1995).

If you intend to apply to specialty/GP training in the 2007 application rounds, you will need to

compete for entry in the normal way and, if successful, discuss your requirement to train less than

full time with the deanery responsible for the programme. Current flexible training arrangements

will not be transferred automatically to a new programme.

If you need to train less than full time and are planning to apply in the 2007 application process,

you:

■ will need to confirm with your local deanery that you are eligible (see below)

■ will need to indicate your wish to train less than full time on your application form, but

this will be “protected” information and will not be seen by anyone involved in the selection

process;

■ will need to discuss the details of your training needs with the relevant deanery, if you

are selected

■ you may be offered the opportunity to slot share, or to occupy a full time post, but take

less than full time training through it; or

■ you may be offered a separately funded flexible training post.

These arrangements will vary within deaneries and in some cases, you may have to wait for a

training placement to become available. In Oxford the Associate Dean in charge of flexible training

is Dr. Barbara Thornley and she can be contacted on bthornley@oxford-pgmde.co.uk

Trainees who want out-of-programme experience

You will still be able to take time out of your run-through training programme, as long as you have

the prospective agreement of the postgraduate dean and programme director. This is arranged on

a person-by-person basis. The rules about out-of programme experience are unlikely to change

significantly from current arrangements, although the basis on which competences acquired

outside of UK training programmes will be assessed towards the award of a CCT remains an issue

for PMETB guidance.

Non-UK trained doctors

If you have not undertaken your early training in the UK, you are still eligible to apply for

specialist/GP training at the appropriate level, indicated by the specialty person specification. If you

have not undertaken a UK Foundation Programme, you will need to offer evidence to appointment

panels that you have acquired the foundation competences. There will be advice available on royal

college websites concerning the type of evidence which might be relevant and this should be

collected into a portfolio which will need to be produced during the selection process.


Transductor (power device)

Electrical amplifier — transductor: a device that boosts power, voltage, or current.

This device consists of one or more ferromagnetic cores together with a number of windings, by means of which an alternating current or voltage can be varied by an independent d.c. (or a.c.) voltage or current, making use of the phenomenon of saturation in the magnetic circuit.

The French term "transducteur magnétique" (in English, "transductor") should not be confused with the common French term "transducteur" (English "transducer"). Using the second term in place of the first is considered acceptable only when no ambiguity can arise.

Amplifier element (element = component): one of the cores of the device which, together with its windings, forms part of the amplifier (transductor).

Excitation winding: a winding on an amplifier (transductor) element by means of which the device is excited.

Power winding: a winding of an amplifier (transductor) element through which the load current flows.

Control winding: an excitation winding by means of which the output power is controlled from an external source.

Bias winding: an excitation winding carrying a current used to shift the mean operating point on the static characteristic.

Self-excitation winding: an excitation winding by means of which the phenomenon of self-excitation is produced.

Self-excitation rectifier (valve): a rectifying device or valve connected in series with the power winding of an amplifier (transductor) so that self-excitation is produced in it.

- Terms relating to physical quantities -

Output voltage — load voltage: the voltage delivered to the load impedance in a circuit containing an amplifier (transductor).

Absorbed voltage: the voltage absorbed by an amplifier (transductor) in a circuit.

Control current: the current flowing in a control winding of an amplifier (transductor) device.

Control voltage: the voltage across the control terminals of an amplifier (transductor) device.

Static characteristic (of an amplifier device); transfer curve (of an amplifier device): a graphical representation of the relationship between an output quantity and a control quantity under steady-state conditions.

Voltage ratio — voltage amplification: the ratio of a small change in the output voltage, under steady-state conditions, to the corresponding change in the control voltage, for a specified load and operating conditions.

Current ratio — current amplification: the ratio of a small change in the output current, under steady-state conditions, to the corresponding change in the control current, for a specified load and operating conditions.

Power amplification — power gain: the ratio of a small change in the output power, under steady-state conditions, to the corresponding change in the control power, for a specified load and operating conditions.

Total time constant: the time constant of the response of an output quantity of an amplifier (transductor) to a small sudden change in the control voltage, for a specified load and operating conditions.

Lag time constant: the time constant of the response of an output quantity of an amplifier (transductor) to a small sudden change in the control current, for a specified load and operating conditions.

Input time constant: the difference between the total time constant and the lag time constant.

Response time: the time from the instant of a sudden change in a control quantity until the corresponding change in an output quantity has reached a specified fraction of its final value.

Saturation inductance: the inductance of a winding corresponding to small flux changes in the saturation region of the magnetization curve.

Saturation reactance: the reactance corresponding to the saturation inductance at the frequency of the alternating-current power supply.

Figure of merit: the ratio of the power amplification (power gain) to the response time.
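Written out symbolically (the symbols are chosen here only for illustration, following the two definitions above): the power amplification is A_P = ΔP_out / ΔP_control under steady-state conditions, and the figure of merit is F = A_P / t_response, so a transductor that gains more power per unit of control power, or responds faster, has a higher figure of merit.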


Solid capacitors and their advantages

Why use solid capacitors?

The conductive polymer used in solid capacitors helps deliver the following outstanding characteristics:
Low ESR in the high-frequency region
High ripple-current tolerance
Longer lifetime
Ability to withstand high temperatures

Low ESR in the high-frequency region — a cooler motherboard
A lower equivalent series resistance (ESR) means less electrical energy is wasted: solid capacitors inherently present a lower impedance at high frequencies. Because of this lower impedance, solid capacitors run more stably and generate less heat than electrolytic capacitors.

Ripple-current tolerance for greater motherboard stability
Ripple current is absorbed in the switching power stage, which plays a decisive role in the design of the motherboard's power supply. Solid capacitors handle power switching better and therefore contribute considerably more to the robustness of the motherboard than electrolytic capacitors do.

Durable motherboards with a longer lifetime
In terms of lifetime, solid capacitors last longer than electrolytic capacitors, especially under light load. As the table below shows, at 65°C the average lifetime of a solid capacitor is more than six times that of an electrolytic. Expressed in years, solid capacitors last about 23 years, whereas electrolytic capacitors wear out after only about 3 years. Clearly, solid capacitors have a higher average lifetime than electrolytic capacitors.
     

     
Temperature | Solid capacitors | Advantage     | Electrolytic capacitors
95°C        | 6,324 hrs        | 1.5× longer   | 4,000 hrs
85°C        | 20,000 hrs       | 2.5× longer   | 8,000 hrs
75°C        | 63,245 hrs       | 4× longer     | 16,000 hrs
65°C        | 200,000 hrs      | 6.25× longer  | 32,000 hrs
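As a rough check of the figures quoted above, assuming continuous 24-hour operation: 200,000 h ÷ (24 × 365) ≈ 22.8 years for the solid capacitor at 65°C, versus 32,000 h ÷ (24 × 365) ≈ 3.7 years for the electrolytic, which matches the "about 23 years versus about 3 years" comparison in the text.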


Ability to withstand high temperatures — a more reliable motherboard
The capacitance of solid capacitors stays constant through extreme temperature changes. Solid capacitors maintain a more stable capacitance and are less easily damaged by temperature swings. As the chart shows, even at the maximum temperature solid capacitors keep a relatively constant capacitance, especially compared with electrolytic capacitors.

No capacitor swelling — more headroom for overclocking
Swollen and leaking capacitors troubled motherboard users for years. This problem noticeably degrades a computer's performance and can even ruin motherboards that have seen little use. Since there is no liquid inside solid capacitors, boards built with them do not leak or burst. In addition, their ability to withstand harsh conditions and their overall robustness make them suitable for demanding operating environments.


Comparison of solid and electrolytic capacitors

[Table: features compared for solid vs. electrolytic capacitors — heat resistance, usable ripple current, ESR at high frequency, SMD production, reliability, environmental protection; the per-column ratings were shown graphically in the original.]


Summary of solid-capacitor characteristics

Solid capacitors have a lower ESR.
Their impedance-frequency curve is close to the ideal curve.
They are suitable for use as decoupling capacitors to remove noise such as ripple, short-duration voltage transients, digital, static and audio noise, and so on.
They can handle large ripple currents.
They are suitable for miniaturization, for example as smoothing capacitors in switching power supplies.
They are capable of fast electrical discharge, and are suitable as backup capacitors in circuits where a large current is drawn at high speed.

The ESR of solid capacitors is not affected by temperature.
Solid capacitors can be used in equipment that operates at low temperatures (0°C or below).

Solid capacitors have a long service life.
You can expect 20,000 hours (3 years) of operation at 85°C from a solid capacitor.
They are suitable for equipment that must last a long time.


What is Blu-ray technology?

For many, home movies were originally played on the classic VCR tape. Then the technology moved on to DVD players, and movies looked sharper and of much better quality. Now the next evolution has started with Blu-ray technology.

Finding out what it is includes learning about the differences between this kind of technology and the current mass-marketed DVD systems. This new kind of technology has been developing for years, since the mid-1990s, when HDTVs were becoming more common for consumers to buy. A technology was needed that could record and play back high-definition recordings. Blu-ray technology was created to fill that void.

So what is so special about this technology, and how is it different from the standard DVD? Blu-ray technology can store far more information than the traditional DVD: almost five times more storage is available on a Blu-ray disc. Blu-ray discs use a blue laser to read the information, whereas other DVDs use a red laser.

With a blue laser the wavelength is shorter, allowing more storage to be used. This did cause some problems originally, as the discs were much easier to scratch. The case that held the disc had to be made more durable and was somewhat bulky. Advances in polymer coatings have allowed a better protective coating to be placed on the disc, removing the need for the bigger containers.

Many companies have a stake in the development of the next cutting-edge technology, and these companies are looking into both Blu-ray technology and HD DVD. Some of the big companies are fighting over which technology should be used, and this has caused a split in which companies support which format. Even companies that produce movies are split over which type of technology to use, which means that, depending on what movie a consumer wishes to purchase, they may need two different types of players.

Both Blu-ray and HD DVD players are continuing to improve. In the end, consumers may discover that they enjoy both types of players, and both may be successful with consumers. Learning what this technology actually is can help a consumer get a good grasp of the basics of this new technology.


What is a Zener diode?
The Zener diode:
Zener or breakdown diodes are semiconductor p-n junction diodes that operate in the reverse-bias region and have many applications in electronics, especially as voltage references or voltage regulators.

When the electric potential across the diode is increased in the reverse direction, breakdown occurs at a certain voltage, meaning that with any further increase in voltage the current rises rapidly and suddenly. Zener or breakdown diodes are diodes that operate in this region, the breakdown region, and their thermal capacity is such that they can withstand a specified, limited current in the breakdown state. Two mechanisms are used to explain the physics of breakdown.

The first mechanism occurs at voltages below about 6 volts in diodes with a high carrier (doping) concentration and is known as Zener breakdown. In these diodes, because of the high impurity concentration in both the p and n regions, the space-charge (depletion) region of the junction is narrow, so applying a reverse potential V across the diode creates a very large electric field in the junction region.

As the potential V increases, a point is reached where the force produced by the electric field breaks one of the covalent bonds. With a further increase of the potential across the diode, since the energy of the covalent bonds of the valence band in the semiconductor crystal is nearly zero, the voltage changes very little; instead, more and more valence bonds are broken and the diode current increases.

Experiment shows that the temperature coefficient of the breakdown voltage for this type of diode is negative: as the temperature increases, the breakdown voltage decreases, so the diode enters breakdown at a lower voltage. (The band-gap energy of silicon and germanium at absolute zero is 1.21 and 0.785 eV respectively, and at 300 K it is about 1.1 eV for silicon and 0.72 eV for germanium.) It can be shown that the electric field required to produce the Zener effect is on the order of 2×10⁷ V/m.

This field strength is reached at voltages below 6 V in diodes in which the carrier concentration is very high. In diodes with lower carrier concentrations the Zener breakdown voltage is higher, and another phenomenon, avalanche breakdown, occurs first; we examine it below.

The other mechanism cited for breakdown is avalanche breakdown. It applies to diodes whose breakdown voltage is greater than about 6 volts. In these diodes, because the impurity concentration is low, the space-charge region is wide and the electric field is not strong enough to break covalent bonds directly. Instead, minority carriers released by thermal energy are accelerated by the field, acquire enough kinetic energy, collide with the ions of the crystal in the space-charge region, and break covalent bonds. Each broken bond creates new carriers, which in turn break further bonds.

In this way the bonds are broken progressively, in a chain or avalanche fashion, which causes the voltage across the diode to remain nearly constant while its current increases, limited only by the external circuit. Such diodes have a positive temperature coefficient: raising the temperature sets the atoms of the crystal into stronger vibration, so the probability that minority carriers collide with ions while crossing the space-charge region increases. Because of the larger number of collisions, the probability that a hole or electron gains the kinetic energy needed to break a bond between two successive collisions decreases, and consequently the breakdown voltage rises.

What is a solid capacitor?

Solid capacitors and electrolytic capacitors both store electricity and discharge it when needed. The difference is that solid capacitors contain a solid organic polymer, while electrolytic capacitors use a common liquid electrolyte.

 

 

     
[Figure: Solid capacitor — separator sheet (electrolyte) impregnated with conductive polymer. Solid capacitors are composed of a highly electro-conductive polymer that dramatically improves stability and reliability.]

[Figure: Aluminum electrolytic capacitor — separator sheet (electrolyte) impregnated with electrolytic solution.]

What is FireWire?
FireWire is one of the fastest peripheral standards ever developed, which makes it great for use with multimedia peripherals such as digital video cameras and other high-speed devices like the latest hard disk drives and printers.

FireWire is integrated into Power Macs, iMacs, eMacs, MacBooks, MacBook Pros, and the iPod. FireWire ports were also integrated into many other computer products dating back to the Power Macintosh G3 "Blue & White" computers. All these machines include FireWire ports that operate at up to 400 megabits per second, and the latest machines include FireWire ports that support 1394b and operate at up to 800 megabits per second.

FireWire is a cross-platform implementation of the high-speed serial data bus -- defined by the IEEE 1394-1995, IEEE 1394a-2000, and IEEE 1394b standards -- that can move large amounts of data between computers and peripheral devices. It features simplified cabling, hot swapping, and transfer speeds of up to 800 megabits per second (on machines that support 1394b).

Major manufacturers of multimedia devices have been adopting the FireWire technology, and for good reason. FireWire speeds up the movement of multimedia data and large files and enables easy connection of digital consumer products -- including digital camcorders, digital video tapes, digital video disks, set-top boxes, and music systems -- directly to a personal computer.


What is Blu-ray?
 
Blu-ray, also known as Blu-ray Disc (BD), is the name of a next-generation optical disc format jointly developed by the Blu-ray Disc Association (BDA), a group of the world's leading consumer electronics, personal computer and media manufacturers (including Apple, Dell, Hitachi, HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony, TDK and Thomson). The format was developed to enable recording, rewriting and playback of high-definition video (HD), as well as storing large amounts of data. The format offers more than five times the storage capacity of traditional DVDs and can hold up to 25GB on a single-layer disc and 50GB on a dual-layer disc. This extra capacity combined with the use of advanced video and audio codecs will offer consumers an unprecedented HD experience.
 
While current optical disc technologies such as DVD, DVD±R, DVD±RW, and DVD-RAM rely on a red laser to read and write data, the new format uses a blue-violet laser instead, hence the name Blu-ray. Despite the different type of lasers used, Blu-ray products can easily be made backwards compatible with CDs and DVDs through the use of a BD/DVD/CD compatible optical pickup unit. The benefit of using a blue-violet laser (405nm) is that it has a shorter wavelength than a red laser (650nm), which makes it possible to focus the laser spot with even greater precision. This allows data to be packed more tightly and stored in less space, so it's possible to fit more data on the disc even though it's the same size as a CD/DVD. This together with the change of numerical aperture to 0.85 is what enables Blu-ray Discs to hold 25GB/50GB.
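The capacity gain follows directly from the smaller focused spot: the spot diameter scales roughly with λ/NA, so the areal density scales with (NA/λ)². Taking the DVD side as λ = 650 nm with NA ≈ 0.6 (the DVD aperture is not stated above and is assumed here as the typical value) and the Blu-ray side as λ = 405 nm with NA = 0.85 gives (650/0.6)² / (405/0.85)² ≈ 5.2, in line with the roughly five-fold jump in single-layer capacity from a DVD's 4.7 GB to Blu-ray's 25 GB.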

The internal structure of RAM (random access memory)
Although basic computer RAM is a relatively simple device compared to the digital devices that get most of the attention nowadays, the internal structure of RAM is not well understood, and since RAM is such a crucial component of any device that uses a digital microprocessor, it's in the microcomputer/microelectronic technician's best interests to understand RAM.

This page will focus mainly on SRAM (Static RAM). SRAM retains the values you put in it, unlike DRAM (Dynamic RAM), which needs to be refreshed several times every second. The only real advantage DRAM has over SRAM is that it's much cheaper, so it's necessarily used for the main system RAM on most PCs. (If RAM manufacturers used SRAM for main PC RAM, the RAM in your computer would probably cost more than the CPU!) DRAM sucks, however, because the fact that it needs to be constantly refreshed makes it hugely annoying to work with. So, SRAM it will be!

On the outside, SRAM chips are pretty simple. Aside from address bus and data bus pins and two power pins, there are only three other pins on a typical SRAM chip: Chip Enable (CE), Output Enable (OE), and Write Enable (WE). On the inside, all RAM chips consist mainly of a big grid of RAM cells, tiny devices which are each capable of storing a single bit. (Of course, the RAM cells are organized into bytes. Typically, 8 bits make a byte, although this is not necessarily the case.) So we see that all an SRAM chip really has to do is use the address sent to it to select a single byte-sized line of RAM cells, enable all those cells, and if it's writing to memory, to change what's stored in those cells. A short enough explanation, but each step of the process involves devices which contain many smaller devices.

Let's start with the most fundamental part of an SRAM chip: A RAM cell. In SRAM, the RAM cells are basically D-type flip-flops, so to understand RAM cells, you need to understand D flip-flops. Before we get into D flip-flops, however, you need to understand...

The Set/Reset Latch

The set/reset latch is the most basic latch circuit. A latch is a digital electronic logic circuit which, upon receiving an instruction to change its output, will "latch" the output, so that the output does not change even after the input has been removed. The set/reset (S/R) latch looks like this internally:

Internal structure of an S/R latch

The S/R latch has two inputs and two outputs. The two inputs are labeled "Set" and "Reset". Set, when enabled, turns on the latch, and Reset turns it off. The two outputs are labeled Q and /Q. (The Q with the line over it in the diagram means "NOT Q", or the inverse of Q. Since there is no way to create a line over a character in text, usually the convention of preceding a signal with a slash is used to indicate "NOT".) Q is the main output for the latch. When the latch is on, Q will be 1. When the latch is off, Q will be 0. /Q is the opposite of Q, so when Q is 1, /Q will be 0, and vice-versa.

Note that this is an active-high S/R latch, meaning that its inputs trigger when they go high. It's possible to make an active-low S/R latch by replacing the NOR gates with NAND gates, but we won't get into that now.

As you can see, when Set goes high, the output of the NOR gate on the bottom must be 0 (because when either NOR input is high, the output is 0). This sets /Q to 0. This same 0 goes to the lower input on the NOR gate at the top; Since Reset must be low (since Set is high), both inputs to the NOR gate at the top are 0. Therefore, since both inputs are 0, it outputs a 1. This 1 hits the top input of the bottom gate, keeping the gate on (and setting Q to 1), and the latch remains stable in this state until Reset goes high. Similarly, the opposite happens when Reset goes high. The workings of this latch may seem confusing at first, but if you follow the logic paths you should be able to understand it clearly.
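To see the latching behavior numerically, the two cross-coupled NOR gates can simply be iterated until their outputs settle. The little MATLAB sketch below (an illustration, not part of the original article) applies a Set pulse, releases it, then applies Reset, and shows that Q holds its value between pulses.

% Cross-coupled NOR latch: iterate the two gate equations until stable.
nor2 = @(a,b) ~(a | b);
Q = 0; Qn = 1;                          % assume the latch starts in the reset state
inputs = [1 0; 0 0; 0 1; 0 0];          % rows of [Set Reset]: set, hold, reset, hold
for k = 1:size(inputs,1)
    S = inputs(k,1); R = inputs(k,2);
    for it = 1:4                        % a few passes are enough for the loop to settle
        Q  = nor2(R, Qn);               % top NOR gate: inputs Reset and /Q
        Qn = nor2(S, Q);                % bottom NOR gate: inputs Set and Q
    end
    fprintf('S=%d R=%d -> Q=%d\n', S, R, Q);
end

Running it prints Q=1 after the Set pulse, Q=1 while both inputs are low (the latch "remembers"), and Q=0 after Reset, exactly the behavior described above.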

The R/S latch is the basis for most digital flip-flop circuits. Once you understand it, you can move on to...

The D flip-flop

The D flip-flop is quite a simple digital device with four pins; Two of these pins are inputs, and two are outputs. The chief input is the D (Data) pin, which, like any other digital signal, can receive either a 1 or a 0. The other input is the E (Enable) pin, sometimes labeled the Clock or Clk pin. The two outputs are Q and /Q (NOT Q, or the inverse of Q). However, within RAM, the /Q output of a D flip-flop is not used, and thus the flip-flop can, for purposes of using it in RAM, be reduced to a three-pin device with two input pins and one output pin. A simple enough device, indeed.

The operation of the D flip-flop is simple: The Q output reflects the D input. When the Enable or Clock pin is activated, the state of D is stored in Q. Once this happens, Q stays the same and does not change, regardless of the state of D, unless the Clock pin is triggered again. The D flip-flop thus acts as a single-bit memory storage unit: When you want to store a bit in it, you set D accordingly and pulse its clock. Once this is done, its Q output will reflect the bit stored in the flip-flop until you change it.

Black-box image of a D flip flop

Internally, the D flip-flop is basically an R/S latch with some additional circuitry added to the inputs.

Note that D flip-flops are usually "edge-triggered", meaning that they will change their state only in the moment that the Clock pin is enabled. The D flip-flop diagrammed here is not edge-triggered; The output will follow the input as long as the Clock pin is enabled. There's nothing really wrong with this in terms of using the flip-flop for RAM. We could turn it into an edge-triggered device with some more gates, but that's not necessary now.

The D flip-flop is a great device, but to make it more useful, it should come equipped with an Enable pin. Many D flip-flop chips do have Enable pins, but in keeping with the theme of illustrating the internals of these devices, it's appropriate to show you...

How to use a transistor as an Enable pin

In digital logic design, you can add an Enable signal to just about any digital signal by simply running it through a tri-state logic buffer. A digital logic buffer is just a device that takes whatever digital logic is fed into it, and outputs the exact same signal. (Sort of like a NOT gate, except without the inverting part.) A "tri-state" digital device is one which includes an Enable pin, so that you can enable or disable the output. When the Enable pin is turned off, the output goes into a "high-impedance" state in which it is essentially a dead pin, disconnected from the rest of the device. The digital logic symbol for a tri-state logic buffer looks like this:

A tri-state digital logic buffer

At the component level, a tri-state digital logic buffer is really just a single transistor. The base of the transistor acts as the Enable pin, the transistor's Collector is the logic input, and the Emitter is the logic output. Thinking about it in this way, the tri-state logic buffer would look like this:

An NPN transistor used as a tri-state digital logic buffer

Now that we know how to make a D flip-flop and put an Enable pin on it, we have...

A complete RAM cell

A typical RAM cell has only four connections: Data in (the D pin on the D flip-flop), data out (the Q pin on the D flip-flop), Write Enable (often abbreviated WE; The C pin on the D flip-flop), and Output Enable (the Enable pin which we added). Now that we have this concept, we can black-box it, which, for simplicity's sake, I will do on this web page from this point henceforth. Our RAM cell, made into a logic block, looks like this:

Black-box image of a RAM cell

If you've understood everything thus far, you're almost done with understanding how SRAM works. You already know how one individual memory cell works, so now the trick is to just arrange them in an array so that you can address each one independently. To do this, we need to be able to take memory addresses and use them appropriately, so the next thing we'll learn is...

How an address decoder works

An address decoder is a device which reads in a binary-represented memory address, and based on the address it receives, turns on a single output. If an address decoder has n inputs, then it will have 2^n (2 to the power of n) outputs. At any point in time, only one output line is on, and all the others are off. The decoder must have a separate output for every byte in memory. Since a byte is 8 bits (usually), and every RAM cell is one bit, each output from the memory decoder goes to 8 RAM cells.

For simplicity's sake, we'll illustrate two small-scale memory decoders: The 2-to-4 decoder, and the 3-to-8 decoder. In reality, a modern RAM chip would have much larger decoders than this; An 8 kilobyte RAM chip (which is quite small by today's standards) would have a built-in 13-to-8192 decoder, but trying to draw that and represent it here on this website would probably be overkill.

An address decoder is a form of combinational circuit; the idea behind it is that for every possible combination of inputs, there needs to be a separate output that will activate. For example, suppose we have a 2-to-4 decoder: a decoder with two inputs and four outputs. For every possible combination of 1s and 0s on the two inputs, a different output needs to activate. There are four possible ways to put 1s and 0s on two inputs: 00, 01, 10, and 11. If we call the inputs "in0" and "in1" and the outputs "out0" to "out3", then the 2-to-4 decoder's truth table looks something like this:

in0 in1 | out0 out1 out2 out3
--------+--------------------
 0   0  |  1    0    0    0
 0   1  |  0    1    0    0
 1   0  |  0    0    1    0
 1   1  |  0    0    0    1

The circuit diagram for a decoder might look complicated at first, but actually, it can be pieced together from a pretty simple idea, so just before I show you the diagram for one, let me try to explain the concept: To make a decoder, you attach two wires to each input. One wire simply comes directly from the input, while the other wire passes through a NOT gate (an inverter, which sets a logic 0 to a 1, and vice-versa). Once this is done, you have something that looks like this:

Two memory address inputs, split into paths that pass through inverters, and ones which don't

After this, take one wire from each input, and connect the ends of them to an AND gate. The output of the AND gate then becomes one of the address decoder's outputs. Add different AND gates for each possible combination of inputs, and you're done. Each outputting AND gate must have a different combination of input triggers. This way, only one output will ever turn on at a time.

For example, while making our 2-to-4 decoder, suppose you just happen to take the input from in0 that DOESN'T pass through an inverter, and the input from in1 which DOES pass through an inverter. It should be clear that in order for both of these wires to be holding a logic 1, in0 needs to be on, and in1 needs to be off. This corresponds with the third line of the truth table above, so after you connect these two wires to the inputs of an AND gate, the output for that AND gate becomes out2.

Whether or not you understand what was written above, perhaps the diagram below of a 2-to-4 decoder will make things clearer now:

Internal circuit diagram of a 2-to-4 address decoder

The red lines indicate the wires going to the AND gate at the top. The output of this AND gate will come on only when both inputs to the decoder are on. The green lines are for the second-highest AND gate, which will energize when the lower input is on, but the top input is off. The purple wires signify the AND gate which will activate when the top input is on but the bottom one is off, and finally, the blue lines lead to the AND gate for when both inputs are off. There are four possible combinations of input to this decoder, and each has a corresponding single output. A 3-to-8 decoder works the same way, except it would have eight AND gates at the right, three inputs, a NOT gate for each input, and more wiring.
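The same gate-level idea can be written out as a short C sketch (my own illustration): each output is the AND of one plain or inverted copy of each input, so exactly one output is on for each address.

    #include <stdio.h>

    /* 2-to-4 address decoder built from NOT and AND operations,
     * matching the truth table above. */
    void decode_2_to_4(int in0, int in1, int out[4])
    {
        int n0 = !in0;          /* inverted copies from the NOT gates */
        int n1 = !in1;

        out[0] = n0  && n1;     /* address 00 */
        out[1] = n0  && in1;    /* address 01 */
        out[2] = in0 && n1;     /* address 10 */
        out[3] = in0 && in1;    /* address 11 */
    }

    int main(void)
    {
        int out[4];
        decode_2_to_4(1, 0, out);   /* address "10": only out2 should turn on */
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        return 0;
    }

A 3-to-8 decoder is the same pattern with three inputs, three inverters, and eight three-input AND terms.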

Now that we know how to make a RAM cell, a tri-state buffer, and an address decoder, we have all the sub-circuits we need to make a complete, working RAM array. It's time to put it all together.

An SRAM array

For this introduction, we'll illustrate a 4x2 SRAM array. RAM arrays are designated as bxw, where b is the number of bytes in the array, and w is the byte width, meaning how many bits are in each byte. Thus, our 4x2 RAM array has 4 bytes, and each byte contains two bits. (Most RAM arrays that you see in electronics parts catalogs will be somethingx8, because it's pretty typical to have 8 bits in a byte, but it's good to be different sometimes, and having only two bits to a byte makes things easier to draw, too.)

In an SRAM array, the RAM cells are arranged and wired up as follows:

Typically, when diagrammed, a row of RAM cells represents one byte, and each column represents one bit position within each byte. So in our 4x2 RAM array, we have 4 rows (because we have 4 bytes total), and each row is two columns wide (because each byte has two bits):

RAM cells placed in an array

The Enable pins on the RAM cells are driven by the outputs of the address decoder. Each output of the address decoder goes to every RAM cell in one row of the RAM array (because you want to enable all of the bits in a byte when that byte is accessed).

The address decoder connected to the RAM cells' Enable pins

The outputs from the address decoder are also ANDed with the Write Enable signal before going to the cells' Write Enable pins. That way, the data gets stored in a cell only when both that cell's address line AND the Write Enable signal are on. (Please note that at this point I lost my patience with trying to draw all this, so the picture below is only half-done; the output of the AND gate in each row should also go to the RAM cell on the right, but things got too cluttered for me to add this easily.)

The Write Enable signal hooked up to the RAM cells

The only thing remaining is the data pins. They can simply be left as they are, to provide two separate data buses (one for data in, the other for data out), but microprocessors usually expect a bidirectional data bus. To achieve this, the data bus is connected to both the Data In pins and the Data Out pins of the RAM cells. This gives you a bidirectional data bus, but a bit of additional circuitry is needed so that data flows either into the RAM array or out of it, but not both at once. And so we come to...

Making the data bus bidirectional

To ensure that data only flows in one direction at a time (either coming out of the RAM array or going into it), two diodes are used right next to each RAM cell, so that data only goes into it or out from it, but not both:

Graphic representation of how a RAM cell is wired to a bidirectional data bus
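Putting the pieces together, here is a minimal C sketch of the 4x2 array's behavior (my own simplification: it keeps separate read and write paths instead of modeling the shared bidirectional bus). The address plays the role of the decoder output that enables one row, and Write Enable stores both bits of that row.

    #include <stdio.h>

    #define NUM_BYTES  4    /* rows: 2 address bits -> 4 bytes           */
    #define BYTE_WIDTH 2    /* columns: 2 bits per byte in this toy array */

    static int cells[NUM_BYTES][BYTE_WIDTH];   /* one RAM cell per bit */

    void sram_write(int address, const int data[BYTE_WIDTH])
    {
        /* The decoder enables exactly one row; Write Enable latches the data. */
        for (int bit = 0; bit < BYTE_WIDTH; bit++)
            cells[address][bit] = data[bit];
    }

    void sram_read(int address, int data[BYTE_WIDTH])
    {
        /* Output Enable on the selected row drives the data-out lines. */
        for (int bit = 0; bit < BYTE_WIDTH; bit++)
            data[bit] = cells[address][bit];
    }

    int main(void)
    {
        int in[BYTE_WIDTH] = { 1, 0 };
        int out[BYTE_WIDTH];
        sram_write(2, in);      /* store the two-bit byte "10" at address 2 */
        sram_read(2, out);
        printf("byte at address 2: %d%d\n", out[0], out[1]);
        return 0;
    }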

And there you have it. That's pretty much how RAM works. At least SRAM, anyway. And you don't really want to know how DRAM works, because DRAM sucks. Everybody should use SRAM and eliminate DRAM altogether.


What is Hi-Fi?
Hi-fi is simply the shortened term for high fidelity. It became popular in the 1950s and was used to describe the reproduction of images or sound in their purest form. Hi-fi is most often associated with sound, such as music. Hi-fi means that reproductions are clear, are generally free of background noise, and offer minimal distortion. Since hi-fi equipment is meant to make reproductions as true to the original as possible, enhancements are limited.

High fidelity audio and visual components were at first treated with skepticism. Many people didn’t believe there was much of a difference and thought that hi-fi was a gimmick to sell more costly equipment. Enthusiasts soon learned that hi-fi did indeed offer higher quality reproduction. Hi-fi components became so popular that the term was used to refer to the components themselves as well as to the technology. For example, when referring to a record player or turntable, people might say, “Put a record on the hi-fi.”

Today, the term hi-fi is used to describe any sound system of above average quality. It also refers to the other components that make up home theater systems. It may include everything from your television, DVD player, and satellite receiver to your compact disc player, other stereo components, and surround speakers.

Much like computer enthusiasts, hi-fi enthusiasts enjoy putting together custom systems. Just as a computer enthusiast will choose separate components from diverse manufacturers in order to take advantage of certain specialties, a hi-fi enthusiast will do the same. Not only is this the best way to create a unique, high quality system, it also allows enthusiasts to build the system one piece at a time, giving them greater freedom to spend more on each component. Instead of putting out a lot of money at one time to purchase an entire system, they can build a collection of high quality components at their own pace. This is also important when it comes to upgrading hi-fi equipment, as the enthusiast can simply replace one piece at a time.


DVD (Digital Versatile Disc)
DVDs come in several different types, each of which has advantages depending on the customer's needs. The types are DVD-RAM, DVD-R, DVD-RW, DVD+RW and DVD+R. When you want to buy a DVD, check which type of disc your DVD player supports so that you don't run into problems. You should also think about what you need the disc for: for example, DVD-RAM is the most suitable option for making a backup copy of the programs on your computer's hard drive, while DVD-R is the best choice for a disc that should play in home DVD players. So we recommend that you always make sure your DVD player supports the specific type of DVD you have in mind (this is usually stated in the player's specifications in its manual). Here is a short description of each type:

DVD-RAM, short for "DVD Random Access Memory", can record 4.7 GB of data per side. A DVD-RAM can also be double-sided, giving it 9.4 GB of storage space, and it can be overwritten up to 1,000 times.

DVD-R comes in two types, Authoring and General use. DVD-R has a capacity of 4.7 GB per side. The Authoring type was created to meet the needs of professionals and software producers, and the General type was created for business and consumer use. A DVD-R can only be written once, and both types are supported by the majority of DVD players.

DVD-RW is the rewritable type of DVD; its capacity is 4.7 GB per side and it can be rewritten up to 1,000 times. Data stored on this type of disc lasts between 30 and 50 years. It is worth noting that the main use of this type of DVD is for recording video.

DVD+RW is very similar to DVD-RW, but it is used for video files, data of any kind, or a combination of the two.

DVD+R can only be written once, but shares its other characteristics with DVD+RW.

What is a proportional counter?

A proportional counter is a type of gas-filled detector with two electrodes: one is a cylinder and the other is a wire running along the cylinder's axis. When the detector is operated in a region (in terms of the voltage between the electrodes) where the number of ions produced is proportional to the energy of the radiation, it is called a proportional counter. The voltage applied in this detector is higher than the voltage applied in an ionization chamber; the voltage between the two electrodes is large enough that an electron freed by the ionization of an atom gains sufficient energy on its way toward the anode to ionize further atoms along its path.

Characteristics and operation of the proportional counter

A proportional counter is built from a cylindrical electrode and a central wire, usually made of tungsten. Because of the geometry of the device, the electric field at a distance x from the wire is E = V / (x ln(b/a)), where V is the voltage applied between the electrodes and a and b are the radii of the wire and of the outer electrode, respectively. The electric field is much stronger near the wire and falls off inversely with distance from it, so most of the multiplication takes place close to the central wire. About half of the ion pairs are formed within one mean free path of the central electrode, and 99% within seven mean free paths. The electron collection time is very short. However, because the electrons are created very close to the central electrode, the voltage change ΔV due to collecting the electrons at the central electrode is very small.
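As a numerical illustration of that field expression, E = V / (x ln(b/a)), here is a small C example; the voltage and radii used are made-up example values, not data from any particular counter.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double V = 2000.0;   /* applied voltage in volts (example value)        */
        double a = 25e-6;    /* anode wire radius in metres (example value)     */
        double b = 0.01;     /* outer cylinder radius in metres (example value) */
        double x = 50e-6;    /* distance from the wire axis in metres           */

        double E = V / (x * log(b / a));   /* field strength in V/m */
        printf("E at x = %.0f micrometres: %.3e V/m\n", x * 1e6, E);
        return 0;
    }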

Thus the greater part of the potential drop is due to the positive ions. Even though the positive ions are slower than the electrons, after travelling only a short distance from the central wire they produce most of the potential drop within a short time interval. As a result, the pulse produced by the arrival of an ion pair rises very quickly at first and then more slowly. When the individual ions are formed at different distances from the central electrode, the time at which the pulse forms becomes uncertain, because the time the different electrons need to reach the multiplication region is not the same. The first-stage amplifiers collect the ions in a way that reduces this uncertainty.

Resolving time

In a proportional counter, the ionization is confined to the region around the track of the radiation. Suppose radiation 1 enters the counter at time t1 and a similar radiation 2 enters the detector in another region at time t2. In each case there is a potential drop at the collecting electrode. If the detector's amplifier can distinguish this voltage change as two separate electrical signals, and if this is the smallest time separation for which that distinction is possible, then t2 - t1 is the resolving time of the proportional counter. The resolving time (T) is therefore a function of the electronic system.

If the resolving time were zero, the recorded count rate plotted against the rate of incoming radiation would be a straight line. If, on the other hand, the resolving time were infinite, this curve would bend toward the x-axis in an x-y coordinate system and eventually cross it: as the number of radiations entering the detector increases, the recorded count rate first rises, reaches a maximum, and then tends toward zero. At this zero counting rate the voltage of the collecting electrode stays constant, because the rate at which ions are collected equals the rate at which they leak away.

Position-sensitive proportional counters

One of the basic differences between a proportional counter and a Geiger-Müller counter is that in the proportional counter the ionization is confined to a small region around the track of the incident particle, whereas in the Geiger counter ionization takes place throughout the whole volume of the detector. In proportional counters it is therefore possible to obtain information about the position of the incident radiation. In this type of detector the anode is a wire with high resistance (usually a quartz fibre coated with carbon). Suppose the incident particle creates ions near the anode at position x. These ions are collected by the anode and cause current to flow along it in both directions. The amount of current flowing in each direction depends on the resistance along that path. Because the currents at the two ends of the anode differ, the pulses produced at the two ends differ in height and in rise time. The difference in rise time, which is due to the difference in time constants, is usually what is used to obtain information about the position of the radiation.

Neutron counting with a proportional counter

Besides detecting alpha and beta particles, the proportional counter can also be used to detect neutrons. A practical neutron detector usually contains pure BF3 gas, or a mixture of BF3 and one of the standard fill gases used in gas detectors. When a thermal neutron is absorbed by a nucleus, two strongly ionizing particles are released: an alpha particle and a lithium nucleus moving in the direction opposite to the alpha particle. The pulses produced by these nuclear reaction products are relatively large compared with the pulses produced by radiation such as gamma rays.

Relationship between pulse height and particle type

One point worth noting is the relationship between pulse height and particle type. The height of the pulses produced by heavy ionizing particles such as alpha particles may differ considerably from that of the pulses produced by electrons of equal energy. This difference depends on the type of radiation and is usually small for gas detectors. It occurs in proportional and ionization detectors as well as in semiconductor detectors.

What is a smart card and how does it work?

A smart card is the same size as a plastic credit card, but it has a chip embedded in it. Putting a chip in the card in place of the magnetic stripe turns it into a smart card that can provide services for a wide range of uses. Because of the chip, these cards can control how they are used and will only process the personal and business information of an authorized user.

A smart card can be used for all kinds of banking transactions and payment services, and because it is easy to carry and secure, it gives the user peace of mind and provides the various information the user needs. The wide range of features of smart cards allows merchants to offer their products in global markets and to expand their business activities. Banks, software and hardware companies, airlines and others all have the opportunity to benefit from new card-based services to raise the level of their activities and of the products they offer.

Combining the capabilities built into smart cards creates a closer relationship between business partners and between those who, in all corners of the world, do business with one another in some way.

Today more than 4.4 billion credit cards are in use around the world. Economic and financial activity based on smart cards is growing at about 30 percent a year. Research also indicates that over the next five years the smart card industry, together with the devices and equipment that make their use possible, will grow considerably worldwide; in addition, the growing ability to access computer networks with adequate security, and the expanding use of electronic commerce, will make smart cards even more common.

Given this level of use, smart cards are expected to be used for 95 percent of the wireless and digital telephone services offered worldwide. Asia, Latin America and North America are the regions with the highest potential for adopting smart cards over the next three years.

At present, the main areas in which smart cards are used around the world are pay phones and wireless telephony, banking, health services, and the payment of subscriptions and household utilities.

Why have smart cards become so widespread?

Although billions of smart cards are currently in users' hands around the world, a person may obtain a card in one particular country and want to use it in other countries. To support such uses, equipment manufacturers and smart card providers have developed multi-application card technology and are working to create compatibility between the cards and the equipment distributed around the world. For this to happen, the necessary commercial and technical principles, and standards consistent with each country, must be established and tested between the cards, the terminals and the characteristics of the equipment. The key to achieving this worldwide lies in the hands of the industry itself.

What role do standards play in how well smart cards work?

Standards are what guarantee the compatibility between cards and the readers or supporting devices. Stable, global standards ensure that cards produced and distributed in one part of the world are accepted and used by a device in another part of the world.

There are many industries, services and activities whose operation can be brought under smart cards through international standards and regulations, such as petrol pumps, bank payment systems and many other similar cases. For this reason the International Organization for Standardization has established principles for smart cards, and these principles continue to develop and spread.

Some proprietary industries have also succeeded in creating specific principles and standards for the use of smart cards and are now expanding and consolidating them around the world. The many advantages offered by smart cards have thus led industries and services worldwide to guarantee their success by issuing codified, formal standards and regulations.

* How should the main advantages that smart cards offer the consumer be assessed?

The advantages of smart cards must of course be assessed in the light of the applications and of how the cultural and technical infrastructure is managed and built in each society. In general, the locally established rules and standards, and the way the law treats and supports the uses of these cards, affect how much benefit they bring. The way people live, the importance of access to information and of how it is processed, and the laws that govern financial relations also matter when defining the advantages of smart cards for each region of the world and cannot be ignored. Even so, the main advantages, and the main goals of setting up smart card systems, can be summed up as the ability to manage or control business activities effectively, a marked reduction in fraud, less paperwork, and the removal of redundant and time-consuming procedures.

What is a multi-application smart card?

The smart card was created to make things easier and to cut out redundant work in business and other activities, such as buying and selling, health programmes, banking services, travel services and so on. If a separate smart card were dedicated to each of these activities, the number of cards would itself become a new problem, which would put users off and reduce the cards' usefulness.

A multi-application card is a good answer to this problem, because a multi-application card can support several different kinds of card at once.

For example, the multi-application "Visa" card combines extended Visa credit with debit facilities and stored-value functions for holding an amount of credit, and it can be very useful when travelling.

By covering a wide range of purchases and financial services, multi-application cards have made things much more convenient for users.

What is a contactless card?

There are two kinds of contactless card. The first is a close-range contactless card, which is read by presenting it to a special reader device. The second is a long-range contactless card, which can be read from a certain distance under remote control, without a reader device; this kind is widely used at toll booths.

How much does a chip card cost?

Trying to answer this question is rather like being asked the price of a car without knowing whether it is an old second-hand Volkswagen or a brand-new Rolls-Royce. The price of chip cards depends on their capacity and credit features and varies within a range of roughly 15 to 80 percent.

Why does reloading (recharging) a smart card matter?

Single-use and reloadable cards both have their own markets and users. Single-use cards are used when the user is travelling, or for paying entrance fees and similar expenses; they are mainly intended for a fixed period of time, and once the stored value has been used up they have no further value and are thrown away.

If, however, the card in question is multi-application and, for example, stores value and credit and records the user's debit and credit accounts, the user will not throw it away. It makes more sense for the stored value (credit) to be rechargeable, so that the user is not forced to keep buying single-use cards.

How safe and secure are smart cards?

Smart cards actually offer more security and reliability than other means of storing financial information. A smart card is a safe place to store valuable information such as private keys, account numbers, passwords and other valuable personal data. With their ability to perform complex computations, smart cards can provide a higher level of security and protect the interests of the cardholder.

Are there any guidelines for consumers on the use of smart cards?

Yes. For the first time, smart card manufacturers have provided information about the industry and its distributors, along with general and formal procedures. Understanding these guidelines correctly is very important, especially since, for the first time, this information has been voluntarily adopted across several industries and is still evolving. The guidelines include points such as the following:

* Identify and take into account consumers' individual expectations, and apply the personal guidelines provided to them.

* In order to offer consumers better services and new opportunities, collect, use and keep information about them only to the extent that is needed.

* Provide consumers with a means, installed in various locations, of sending their names to the market or to the company directly, by post, or through any other channel they request.

* The procedures that are in place and accessible restrict employees where personal information is concerned.

 


What is robotics?

Robotics: the science of understanding and designing artificial, intelligent humanoid machines.

What is a robot?
A robot is an intelligent electromechanical machine with the following characteristics:
· It can be reprogrammed repeatedly.
· It is multi-purpose.
· It is efficient and suited to its environment.

The parts of a robot:
· Mechanical and electrical components: chassis, motors, power supply, ...
· Sensors (for perceiving the environment): cameras, sonar sensors, ultrasound sensors, ...
· Actuators (for carrying out the required actions): robot arm, wheels, legs, ...
· Decision-making unit (a program that determines the required actions): moving in a particular direction, avoiding obstacles, picking up objects, ...
· Control unit (for driving and monitoring the robot's movements): the motor forces and torques needed for the desired speed, the desired direction, path control, ...

A history of robotics:

- Around 1250 AD: Bishop Albertus Magnus held a banquet at which iron hosts waited on the guests. On seeing this robot, Saint Thomas Aquinas was outraged; he smashed the iron host to pieces and called the bishop a sorcerer and magician.

- 1640: Descartes built an automaton in the form of a woman and called it "Ma fille Francine". The machine, which accompanied Descartes on a sea voyage, was thrown into the water by the ship's captain, who thought the creature was the work of the devil.

- 1738: Jacques de Vaucanson built a mechanical duck made up of more than 4,000 parts. The duck could make sounds, swim, drink water, eat grain, and digest and then excrete it. Nothing is known today about where this duck is kept.

- 1805: A doll built by Maillardet could write in English and French and draw landscapes.

- 1923: Karel Capek used the word "robot" for the first time, in his play, for an artificial human. The word comes from the Czech word robota, meaning serf or hired labourer. The theme of Capek's play was the control of humans by robots, but he rejected any possibility of robots replacing humans, or of robots having feelings, falling in love, or feeling hatred.

- 1940: The Westinghouse Co. built a dog named Sparko using both mechanical and electrical parts. This was the first time electrical parts were used together with mechanical ones.

- 1942: The word "robotics" was first used by Isaac Asimov in a short story. Isaac Asimov (1920-1992) was a writer of popular science books and of science fiction.

- 1950s: Computer technology advanced and the control industry was transformed. Questions were raised, such as: is a computer a non-mobile robot?

- 1954: The robot era began when George Devol introduced the first programmable robot. Today, 90% of robots are industrial robots, that is, robots used in factories, laboratories, warehouses, power plants, hospitals and similar places. In earlier years most industrial robots were used in car factories, but today only about half of the world's robots work in the car industry. The use of robots is spreading rapidly into every area of human life, so that they can do hard and dangerous jobs in place of people. For example, robots are used today to inspect the inside of reactors, so that radioactive radiation does not harm people.

- 1956: Following the growth of technology after the Second World War, a historic meeting took place between George C. Devol, a well-known inventor and entrepreneur, and Joseph F. Engelberger, an experienced engineer. At this meeting they discussed Asimov's stories. They went on to achieve fundamental successes in robot production and, by founding commercial companies, began building robots. Engelberger founded Unimation (the name taken from Universal Automation) to manufacture robots. The company's first robots were put to work at General Motors for difficult jobs in car manufacturing. Engelberger has been called "the father of robotics".

- 1960s: Many industrial robots were built. The Robotic Industries Association offered this definition of an industrial robot: "An industrial robot is a reprogrammable, multi-purpose device designed to move parts, materials, tools or special devices by means of programmed motions in order to perform a variety of tasks."

- 1962: The carmaker General Motors put the first Unimate robot to work on its assembly line.

- 1967: Ralph Moser of General Electric built the first four-legged robot.

- 1983: The Odetics company introduced a six-legged robot that could climb over obstacles and also carry heavy loads.

- 1985: The first robot able to walk on its own was built at Ohio State University.

- 1996: The Japanese company Honda introduced the first humanoid robot, designed with two arms and two legs so that it could walk, climb stairs, sit down on a chair and stand up again, and carry loads weighing up to 5 kilograms.

Robots are becoming more intelligent every day, so that they can help people more and more with hard and dangerous work.

                        

                              
Asimov's laws of robotics:

1. Robots must never harm human beings.
2. Robots must obey humans' orders, as long as this does not conflict with the First Law.
3. Robots must protect themselves, as long as this does not violate the First or Second Law.
Advantages of robots:

1. Robotics and automation can in many cases increase safety, production rates, productivity and product quality.
2. Robots can work in dangerous situations and in doing so save thousands of human lives.
3. Robots pay no attention to their surroundings in the way people do, and human needs mean nothing to them; robots never get tired.
4. Robots are far more precise than humans; they are accurate to a thousandth or even a millionth of an inch.
5. Robots can do several things at once, whereas humans can only do one thing at a time.
Disadvantages of robots:

1. Robots cannot respond appropriately in emergency situations, which can be very dangerous.
2. Robots are costly.
3. They have limited capabilities; they only do the job they were built for.

 


An introduction to LCDs (what is a simple LCD display?)

LCDs are devices for displaying information made up of letters, digits and some graphic characters. In early experiments with displaying digital information, seven-segment displays are normally used; these can only show the digits 0 to 9 and a few letters such as A, b and C, and not very attractively. Using an LCD, information can be displayed in a more attractive and complete way. Using an LCD is not recommended for simple circuits, however; it is generally used together with a microcontroller or CPU.

What is referred to as an LCD here is in fact an LCD display panel, like the screen of a calculator, supplied as a pre-built module together with its controller IC, its supporting circuitry, and usually a backlight.

As mentioned, the LCD has a controller; when information is sent to it, the controller displays that information on a screen that is usually divided into a number of rows and columns. For example, to display the letter "M" it is enough to send the ASCII code of this letter to the LCD according to a simple protocol. Commands such as clearing the display, moving the cursor, and switching the cursor on or off can also be sent to the LCD.

LCDs are chosen and bought according to how much information they can show on the screen. The common types have 16, 20, 32 or 40 characters per line, with 1, 2 or 4 lines. For example, 2 x 16 means the display has two lines of 16 characters each. The LCD can also be chosen with or without a backlight. LCDs display characters in 5x7 pixel matrices.

Almost all LCDs have 16 pins, 8 of which are used for sending or reading data or instructions. The other pins are control lines and supply voltages. The complete list of pins is as follows:



Pin (name)   Function

1  (Vss)     Ground
2  (Vcc)     5 V supply for the controller
3  (Vee)     Contrast adjustment voltage
4  (RS)      Command / data register select
5  (RW)      Read / write select
6  (Enable)  Enable
7-14 (Bus)   8 data or command bus lines
15           5 V supply for the backlight
16           Ground for the backlight

Vee: used to adjust the brightness (contrast) of the characters; a voltage between 0 and 5 V should be applied to this pin. For maximum contrast, connect this pin to ground.

The command / data register select determines what is being sent to the LCD. If this line is 0, the LCD controller treats the byte on lines 7 to 14 as a command; if this pin is 1, it treats the byte as an ASCII code whose corresponding character should be displayed.

The read / write select sets the direction of the data. If this pin is 0, information is sent to the LCD; if it is 1, a read from the LCD takes place.

Enable: for every command or data byte that we send to the LCD, or want to read from it, we must apply a falling edge (a transition from logic 1 to 0) to this pin so that the command or data is processed by the LCD controller.

Of lines 7 to 14, line 7 is the least significant bit (LSB) and line 14 is the most significant bit (MSB).

To turn on the backlight, apply 5 V to pin 15 and connect pin 16 to ground.

For testing, the LCD can be connected to a printer (parallel) port and data sent to it this way. In this arrangement the port's data lines are normally connected to lines 7 to 14 and three control lines to pins 4 to 6. Note that the LCD's supply voltage and backlight are powered by an external source.

How to send a character:
Set the read/write line to 0 to select writing.
Set the data/command line to 1 to select data.
Put the ASCII code of the desired character on lines D0 to D7.
Take the Enable line high and then low. Hold this line low for at least 450 nanoseconds so that the data is processed; after that, the state of the line has no further effect.
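As a rough illustration of that write sequence, here is a minimal C sketch. The pin helpers (lcd_set_rs, lcd_set_rw, lcd_set_enable, lcd_write_bus, delay_ns) are hypothetical names; on real hardware they would drive the LCD pins through your microcontroller or parallel-port library, while in this sketch they only print what they would do so the example can run anywhere.

    #include <stdio.h>

    /* Stand-in I/O helpers (hypothetical names). On real hardware these would
     * drive the LCD pins; here they just print the action so the sketch runs. */
    static void lcd_set_rs(int level)          { printf("RS     <- %d\n", level); }
    static void lcd_set_rw(int level)          { printf("RW     <- %d\n", level); }
    static void lcd_set_enable(int level)      { printf("Enable <- %d\n", level); }
    static void lcd_write_bus(unsigned char b) { printf("D0..D7 <- 0x%02X\n", b); }
    static void delay_ns(unsigned long ns)     { (void)ns; /* busy-wait on real hardware */ }

    /* Send one character, following the write sequence described above. */
    void lcd_send_char(char c)
    {
        lcd_set_rw(0);                    /* 0 = write to the LCD                    */
        lcd_set_rs(1);                    /* 1 = data (an ASCII code), not a command */
        lcd_write_bus((unsigned char)c);  /* put the ASCII code on lines D0..D7      */
        lcd_set_enable(1);                /* take Enable high ...                    */
        lcd_set_enable(0);                /* ... then low (the falling edge)         */
        delay_ns(450);                    /* hold it low at least 450 ns, as above   */
    }

    int main(void)
    {
        lcd_send_char('M');               /* the example character from the text */
        return 0;
    }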