Questions related to Computer Science
- Aug 26, 2020
We want to have 10,000 sentences judged according to emotional criteria (valence, arousal, etc.). Computer science often uses a few raters (2-5). In other disciplines ten or more raters are used (e.g. psychology, linguistics).
The sentences originate from several technical languages. In addition to teaching materials, they include general discussions during the study of these disciplines. The sentences usually range from medium difficulty to purely technical language.
Lionel Nicolas Thank you very much for your answer! In fact, I suspect that there is no valid recommendation for a single number.
I suppose there may be some statistical reasons for a higher number of raters.
So I will follow your advice, and at the same time keep my eyes open for a minimal necessary number of raters.
- Aug 27, 2020
Please share your thoughts on teaching the "Introduction to Programming" course or teaching in general. Any suggestion/experience is appreciated.
For introducing programming, it's critical to focus on the goal and outcomes of the class, which is to show the students that:
- It's just a machine that will do what it is told
- They can tell it what to do
- You don't need to know a lot of stuff to make it do some pretty powerful things
- Breaking it is fun!
At the end of an introductory course, the students should understand:
- The basic elements of a computer and how they work together to execute a program (CPU, memory, disk, and network) with some very simple descriptions. This provides context for everything else and it's surprising how many entry-level developers don't really know this
- Getting and setting variables, printing stuff out, and console I/O. This lets the students see that they can make the machine do stuff, even if it's just printing out what they typed in. This gives a feeling of control and opens up the possibilities.
- Arithmetic and string operators, so you can show them how to mess with the variables
- Basic flow and flow control (if/then; case/switch). Now the program can make decisions!
- Loops (for, while). Now it can do boring stuff over and over again!
- Functions and parameters. This shows them how to isolate repeated code and is a good place to introduce the DRY (Don't Repeat Yourself) principle. Also is a setup for learning how to make network-based calls, but that should come later.
- Data structures (basic). Ways to represent data as a bundle. Very dependent on the language used, but very important.
- Recursion. This breaks a lot of people, but is the only way to efficiently solve some problems. Don't be too harsh on people that don't "get it"
- Pointers. Understanding the difference between reference and value is critical in so many languages, even those that don't have "pointers" per se like Java, but still have places where values and references are different.
Avoid object-oriented programming for an introductory course. You can use an OO language, just avoid the concepts, because you'll waste days talking about encapsulation, data hiding, public vs. private, and all that junk, which gets in the way of actually making the machine do stuff.
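The reference-vs-value point above can be demonstrated without any pointer syntax. A minimal sketch (Python chosen only for illustration; the same distinction appears in Java, C#, and most managed languages):

```python
def mutate(items):
    # 'items' refers to the caller's list: the change is visible outside
    items.append(99)

def rebind(n):
    # 'n' is rebound to a new object: the caller's variable is untouched
    n = n + 1
    return n

values = [1, 2]
mutate(values)
print(values)   # [1, 2, 99] -- the shared list was modified in place

count = 5
rebind(count)
print(count)    # 5 -- only the local name inside rebind() changed
```

Seeing both calls side by side usually makes the "why did my list change but my number didn't?" confusion click for beginners.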
- Aug 22, 2020
I graduated from a different major and now work in computer science and technology; my roommates mostly do research on image recognition. I want to switch to that area and learn how to cooperate with them, but I am still lost.
Please advise me on how to get started in image recognition. I am currently skilled in nature-inspired algorithms and have read some papers on image segmentation, but I don't know what to do next. If you are experienced, could you show me a way?
Dear Zheng-Ming Gao ,
Advancements in machine learning and the use of high-bandwidth data services are fueling the growth of this technology. Companies in different sectors such as e-commerce, automotive, healthcare, and gaming are rapidly adopting image recognition. According to the report by MarketsandMarkets, the image recognition market is divided into hardware, software, and services. The hardware segment, dominated by smartphones and scanners, can play a huge role in the growth of the image recognition market. There is an increasing need for security applications and products with innovative technologies such as surveillance cameras and face recognition.
- Jul 27, 2020
This Special Issue will focus on control, modeling, various machine learning techniques, fault diagnosis, and fault-tolerant control for systems. Papers specifically addressing the theoretical, experimental, practical, and technological aspects of modeling, control, fault diagnosis, and fault-tolerant control of various systems and extending concepts and methodologies from classical techniques to hybrid methods will be highly suitable for this Special Issue.
Potential themes include, but are not limited to:
Modeling and identification
Adaptive and hybrid control
Adaptive and hybrid observers
Reinforcement learning for control
Fault-tolerant control of systems based on various control and learning techniques
Prof. Dr. Jong-Myon Kim
Prof. Dr. Hyeung-Sik Choi
Dr. Farzin Piltan
This is a nice intersection.
- May 29, 2019
Kindly suggest some SCIE-, ESCI-, or Scopus-indexed computer science journals that charge a fee but have a fast response.
Kindly suggest any paid SCIE journal below $500 with monthly or bimonthly publication.
- Aug 13, 2020
On an online website, some users may create multiple fake accounts to promote (like/comment on) their own comments/posts; for example, on Instagram, to push their comment to the top of the comment list.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general algorithms in unsupervised learning to detect these users/behaviors?
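Common unsupervised approaches include clustering accounts by behavioral features, graph-based community detection on the interaction network, and anomaly detectors such as Isolation Forest. One simple signal behind all of these: sockpuppets controlled by one person tend to interact with the same small set of posts. A toy sketch (the account names and data are invented) that flags account pairs with suspiciously high Jaccard similarity of their liked-post sets:

```python
from itertools import combinations

# hypothetical data: account -> set of post IDs the account liked
likes = {
    "alice":  {1, 2, 3, 4, 5, 6},
    "bob":    {2, 7, 8, 9},
    "sock_a": {10, 11, 12},
    "sock_b": {10, 11, 12, 13},
}

def jaccard(a, b):
    # similarity of two sets: |intersection| / |union|
    return len(a & b) / len(a | b)

# flag pairs of accounts whose liked sets overlap suspiciously
suspicious = [
    (u, v) for u, v in combinations(likes, 2)
    if jaccard(likes[u], likes[v]) > 0.6
]
print(suspicious)  # [('sock_a', 'sock_b')]
```

In practice you would enrich the feature set (timing of actions, shared IP ranges, account age) before clustering; the 0.6 threshold here is a placeholder you would calibrate on your own data.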
- Dec 7, 2018
Recently I prepared a research work for publication in IJCSSE. The site has been down for weeks now, and I suspect the journal could be predatory.
This journal has no impact factor and is not on the SCIE list.
- Jul 27, 2020
I understand vaguely that the first author is supposed to be the one who "did the most work", but what counts as "work" in this comparison? Does "most" mean "more than all the other coauthors together" or just "more than any other coauthor"? What happens when the comparison is unclear? How often is "did the most work" the actual truth, versus a cover story for a more complex political decision?
I realize that the precise answer is different for every paper. I'm looking for general guidelines for how an outsider (like me) should interpret first authorship in your field. Pointers to guidelines from journals or professional societies would be especially helpful.
Welcome Frank T. Edelmann
Other important related concepts are:
1. Principal Investigator (PI), usually indicated after the name
2. Co-Principal Investigator or Co-Investigator (Co-PI/Co-I)
3. Faculty Participant
Traditionally, the last author position is reserved for the supervisor or principal investigator. As such, this person receives much of the credit when the research goes well and the flak when things go wrong.
Multiple “first” authors. Additional “first” authors can be noted by an asterisk or other symbol accompanied by an explanatory note. This practice is common in interdisciplinary studies; however, as we shall explain further below, the first name listed on a paper will still enjoy more visibility than any other “first” author. https://wordvice.com/journal-article-author-order/
Multiple “last” authors. Similar to recognizing several first authors, multiple last authors can be recognized via typographical symbols and footnotes. This practice arose as some journals wanted to increase accountability by requiring senior lab members to review all data and interpretations produced in their labs.
For details please read attached PDFs
- Jun 26, 2020
Suppose somebody wants to mathematically model data, information, and knowledge. Data represents raw material that processing services turn into information; knowledge is acquired by experts handling such information in a particular field such as computer science, psychology, mathematics, or statistics. How can mathematical models be developed to describe the knowledge acquired by an individual, population, or community?
@George Stoica. Your papers are very helpful for my surveys.
- Jul 28, 2020
I have to find out what software and operating systems are used in education, more in lower (primary/secondary) education than in higher education, and in different countries (USA, Australia, Great Britain, Germany, France, Scandinavia, Romania, Hungary, the Czech Republic, and other mostly European countries). Do you have any idea where I can find such statistical data?
Please read this document; I hope it will be helpful to you.
- Mar 27, 2020
I'm looking for Ph.D. programs (Scholarships) in Europe/USA/Canada/Australia/Great Britain.
Professors who are looking for Ph.D. candidate, I'm ready to work with any new subjects in Computer Science Field and especially in Deep Learning/Machine Learning.
I really appreciate your help!
See the attached file
- Feb 20, 2019
In which scientific studies that you run or plan to run would artificial intelligence be helpful?
Dear Kjartan Skogly Kversøy, Anders Norberg, Ronit Kuldip Nandeshwar, Alexander Osherenko, Syed Furqan Qadri, Richard Collins, Preston Guynn, Thank you very much for your participation in the discussion and for an inspiring, interesting and substantively rich answer. I also believe that artificial intelligence technology is currently finding more and more applications in various branches and sectors of the economy. Best regards, Have a nice day,
- Aug 3, 2020
Hi, as part of my bachelor thesis on the design of programming languages for teaching mathematics in the 21st century, I have planned to discuss the evolution of the major programming languages built around the idea that computer programming could play an integral role in STEM education.
In order to analyze different programming languages as a framework for teaching (primarily) mathematical concepts, I am currently searching for (citable) research projects providing insights into the historical development of educational programming languages. – Are you familiar with any research on the evolution of educational programming languages?
Many thanks in advance for your contributions,
YES Tobias ... do not forget paradigms ... they can help you in your investigations.
Specifically, look at the indexes of programming-language use (the TIOBE index, for example). A study of the evolution of educational programming languages should pay particular attention to the paradigms of these languages.
How do we explain the rank of the C language (procedural paradigm)? Why is Python so widely used (object paradigm)? Is it the object paradigm that explains the use of this language, or quite simply that it is used more by non-computer scientists, for whom the principles of the object paradigm and strong typing are completely ignored?
Good luck!
- Jul 25, 2020
Hello everyone, could someone suggest a good syllabus for graph theory and discrete mathematics for a Computer Science - Network department, please?
Thank you in advance.
- Oct 8, 2019
Which Q1 and Q2 research journals in the computer science and cybersecurity area are most suitable for a speedy review and publication process, preferably not paid journals?
Artificial Intelligence Review
- Jun 28, 2020
- Jun 20, 2020
I am trying to build an NN for meteorological prediction, for which I have input data from only one meteorological station. I have target data for more than one location.
I have to train the NN in such a way that I give it two input values (e.g., current temperature and pressure) and obtain temperature outputs at more than 50 locations and for more than one time step.
e.g., the input should give output:
temperature after 2 hrs = 25, 26, 28, 29, 27.5
temperature after 4 hrs = 24, 23, 26, 26.5, 27
In which pattern to arrange the input and target data, and how to change the number of NN output nodes to obtain these results?
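One common arrangement (a sketch, assuming a single dense output layer) is to flatten the location x lead-time grid into one target vector per training sample, so the network needs n_locations * n_steps output nodes:

```python
N_LOCATIONS, N_STEPS = 5, 2  # small stand-ins for the real 50+ locations

def flatten_targets(grid):
    # grid[s][l] = temperature at lead time s, location l -> one flat row
    return [t for step in grid for t in step]

def unflatten(flat, n_steps=N_STEPS, n_locations=N_LOCATIONS):
    # inverse mapping, to read the network's output back as a grid
    return [flat[i * n_locations:(i + 1) * n_locations] for i in range(n_steps)]

# one training sample: input = [temperature, pressure] at the source station
x = [25.0, 1012.0]
# matching target: temperatures at 5 locations for the 2-h and 4-h lead times
y_grid = [[25, 26, 28, 29, 27.5],
          [24, 23, 26, 26.5, 27]]
y = flatten_targets(y_grid)  # 10 values -> the NN needs 10 output nodes
assert unflatten(y) == y_grid
```

Each row of the input matrix is then one station observation and each row of the target matrix is the corresponding flattened grid; the output layer simply gets n_locations * n_steps units with a linear activation for regression.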
Which programming language are you using?
- Jun 30, 2020
Hello researchers, I would like to know some Q1 paid journals with fast publication in the field of computer science.
You can check more from this link
- Jul 31, 2018
One of important aspects of a researcher career is searching for funding of his/her scientific projects. What kinds of projects in computer science are likely to be supported? What are the most important aspects in a project proposal? What are the best opportunities for a young scientist?
Getting financial aid is hard because the companies giving it are private. Some institutes also give financial aid.
- Apr 6, 2019
- Feb 28, 2019
Do we lose information when we project a manifold?
For example, do we lose information about a manifold such as the Earth (globe) when we project it onto a chart in a book (using, say, the stereographic, Mercator, or any other method)?
Similarly, we should be losing information when we create a Bloch sphere for a 2-state system in quantum mechanics, which is also a projected space from a higher dimension (4-dimensional).
Also, is there a way to quantify this information loss, if there is any?
You lose the global information. Examples of global information are the characteristic classes of the manifold "M". A concrete one is the Euler characteristic "e(M)". This number captures several important properties of the manifold. Just to exemplify:
- it describes (approximately) the behavior of any vector field around its singularities. This means that if the Euler characteristic is not zero, then every vector field must have a singularity (this is a result known as the Poincaré-Hopf theorem). For the Earth, we have e(M)=2, so vector fields must have a singularity. If we take the vector field describing the wind, this means that it must be zero at some point. Thus, at each instant of time there is a point on the Earth at which the wind does not blow! And since this is global information, you could not conclude it using a single chart.
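In symbols, the Poincaré-Hopf theorem mentioned above says that the indices of a vector field's isolated zeros add up to the Euler characteristic:

```latex
\sum_{i} \operatorname{ind}_{p_i}(V) = e(M)
```

For the sphere, e(S^2) = 2 is nonzero, so a nowhere-zero tangent vector field on it is impossible; this is exactly why the wind must be calm at some point (the "hairy ball theorem"), while a flat chart, with e = 1 for a disk, carries no such obstruction.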
- May 28, 2020
What are the primary real problems in your specific field that need to be solved based on these two parameters? For example, if this statement holds, computer science needs to:
1. Increase interest in preparing people who are able to develop robots, on both the hardware and software sides
2. Open a new robotics department in all colleges of science and information technology
3. Convert labs from virtual environments into real environments, i.e., reduce the theoretical part and increase the practical parts
[ If you want to progress, you must put forward solutions for the obstacles you will encounter tomorrow]
Dear Dr. Al Janabi: In principle we may have to adopt an automaton way of life, which will be unfortunate!! Robots and AI taking over will be the end of human civilization as we know it!!
- May 18, 2020
I do not know much about game theory. I am looking for a book in which different protocols and algorithms of computer science are analyzed using the concepts of game theory.
John von Neumann is the father of game theory, and he published a book on it. When you go through it, you will understand the subject very well.
THEORY OF GAMES AND ECONOMIC BEHAVIOR
- Jul 5, 2017
I need a dataset that contains research papers from different domains, with title, abstract, body text, and references (or any alternative with those fields). Does such a dataset exist on Kaggle or any other trusted source?
- May 13, 2020
Because I have to publish two papers
Try IEEE Access. It is a peer-reviewed open-access scientific journal published by the Institute of Electrical and Electronics Engineers and covers all IEEE fields of interest. Impact factor: 4.098 (2018).
- May 17, 2018
Matrices play an important role in computer science. They are useful in various computer programs, for example in projecting a three-dimensional image onto a two-dimensional plane.
Probably the formula for the least-squares projection of a vector onto the column space of a matrix. Please check it on Khan Academy.
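As a sketch of the simplest case, the projection of a vector b onto the line spanned by a single vector a is proj_a(b) = (a.b / a.a) a, which is the one-column instance of the least-squares projection formula:

```python
def dot(u, v):
    # standard dot product of two equal-length vectors
    return sum(x * y for x, y in zip(u, v))

def project(b, a):
    # least-squares projection of b onto the line spanned by a
    scale = dot(a, b) / dot(a, a)
    return [scale * x for x in a]

# project b = (3, 4) onto a = (1, 0): only the first component survives
print(project([3, 4], [1, 0]))  # [3.0, 0.0]
```

The full matrix case replaces the scalar a.b / a.a by solving the normal equations A^T A x = A^T b; the one-vector version above is the same idea with a 1x1 system.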
- May 4, 2020
Dear Professor Sir/Madam
My name is Hagos Meles. I have a BSc degree in Computer Engineering and have worked in database- and GIS (Geographic Information System)-related fields for the past 5 years. Now I am studying for a master's degree in computer science at Huazhong University of Science and Technology. Next semester I have to submit the proposal for my research area.
In Asmara (the capital city of my country, Eritrea), the cadastre office's database management system has no geodatabase to hold geographic information about the buildings, roads, and other infrastructure of the city. Because of this, it is difficult to find the exact location of a specific building or other infrastructure. I want to design a geodatabase for this cadastre office as my master's research topic to overcome this problem. What would you suggest? Is this a sufficient research topic for a master's degree in computer science at our school? Thank you for your time.
You have a good idea. Now:
1. Formulate it into a research question, for example: "How does a cadastre DBMS perform with a geodatabase?"
2. List a number of performance metrics against which you can compare your proposed system (i.e., cadastre with geodatabase) to the existing system. You can get appropriate metrics from a literature review of similar works.
3. Develop your system and test it against the metrics in (2) above. Similarly, test the current system against the same metrics.
4. Compare the two systems and conclude whether your proposed system is better, or what the strengths and weaknesses of your proposed system are compared to the current system.
- May 4, 2020
Hello all, good morning. As we all know, we are going through a very tough time, and all companies are closing their doors. I am trying to connect the following points: understand the nature and role of psychology in understanding mind and behaviour; trace the growth of the discipline in developing computer software and NLP-based courses; understand the different fields of psychology and its relationship with other disciplines and professions; and appreciate the value of psychology in daily life to help you understand yourself and others better.
This kind of course will have the following benefits:
- In lockdown mode, providing online therapy and support for those who feel isolated.
- Identifying goals and helping people understand technology better.
- Identifying the criteria for work and data science.
- Analyzing Twitter feeds.
Secondly, we would like to create a PowerPoint presentation on why students would want to take Computer Science with Psychology as a course.
The following points are needed on why we would choose CSE with Psychology as a course:
- What are the market forecasts and the need for this specialization?
- Examples of the job prospects.
- How to select among the career choices.
- What will be the electives to study alongside it?
I think it will be valuable to deliver a course in computer science and psychology. Moreover, it could address a massive audience if it presents information on managing mental health in a computer-driven work environment. There are many implications on the general well-being of a person working in front of a computer majority of their time. The following two articles contain some critical information on this topic.
The main objective in designing the content should be to address a large audience from different disciplines.
- Nov 4, 2019
Big data and machine learning, can both be implemented?
There were some interesting and some important comments so far.
But do not forget:
you have to do something you are interested in (as you have to do this for a few years, and maybe beyond)
AND have the skills to do it (i.e., the ability to complete it)
AND, by the way, you need a supervisor (whom you can work with, and who is willing to work with you for years), so that person has to accept your topic
- Apr 28, 2020
The question of how computers can contribute to controlling the COVID-19 pandemic is being posed to experts in artificial intelligence (AI) all over the world.
AI tools can help in many different ways. They are being used to predict the spread of the coronavirus, map its genetic evolution as it transmits from human to human, speed up diagnosis, and in the development of potential treatments, while also helping policymakers cope with related issues, such as the impact on transport, food supplies and travel.
But in all these cases, AI is only effective if it has sufficient examples to learn from. As COVID-19 has taken the world into uncharted territory, the "deep learning" systems that computers use to acquire new capabilities don’t necessarily have the data they need to produce useful outputs.
The above preprint from my colleague provides many criteria on this topic.
- Mar 6, 2020
Mac OS vs Linux vs Windows??
I personally use MacOS but would like to know what other people use for their research work, preferably researchers associated with Computation work. If possible do let me know the reason. This is just a survey.
I am using Linux and Windows; together they cover my work, as I can't do everything with only one of them. In general, for docking and simulation, Linux is faster and easier to use, but for visualizing and analyzing the results I can't do without Windows.
- Apr 14, 2020
I've used the ATHI scale to investigate attitudes towards homelessness across two disciplines: Social Work and Computer Science. It has been validated. However, the scale only uses 11 items in total, which makes me think this may be one reason. Another reason is that I am simply doing the analysis wrong. Can anyone help?
The ATHI scale has 4 sub-scales (Personal Causation, Societal Causation, Affiliation, and Solutions). When analysing the PC sub-scale I get a Cronbach's alpha of .47. This sub-scale only has 3 items. Also, am I supposed to split these between subjects? If so, for social work students the Cronbach's alpha is .484 and for computer science students it is .525.
Help is appreciated.
When running a Cronbach's alpha, you want to aim for a coefficient of 0.70 or higher. However, it is difficult to get a good score when the scale you are using has a low number of items. Assuming that you are using SPSS, go to Analyze, then Scale, then Reliability Analysis, then click on Statistics and request the "Scale if item deleted" output. This shows which item you would need to remove to raise the alpha towards 0.70 or higher. Once you find that item, remove it from the analysis and run your Cronbach's alpha again. Good luck!
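For reference, Cronbach's alpha can also be computed directly, which makes it easy to check what dropping an item does. A sketch with made-up scores, using alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of respondent scores per item (all the same length)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]          # per-respondent totals
    item_var = sum(variance(scores) for scores in items)      # sum of item variances
    return k / (k - 1) * (1 - item_var / variance(totals))

# hypothetical 3-item sub-scale answered by 5 respondents
items = [[2, 4, 3, 5, 1],
         [2, 4, 3, 5, 2],
         [3, 5, 4, 5, 1]]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # 0.967
```

Note that with only 3 items, as in the PC sub-scale, alpha is structurally limited: even well-correlated items can yield a modest coefficient, which is one reason short sub-scales often score below 0.70.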
- Apr 10, 2020
I am currently writing the thesis for my undergraduate degree, in which I need to design a machine learning model to estimate the soil data between two boreholes based on the bore-log data. Sorry in advance if I make terminology mistakes, as I am a computer science student.
Hi Kent Low
Hope these articles may help you:
- Oct 20, 2013
Some journals listed as good journals in the SCImago Journal & Country Rank (http://www.scimagojr.com), with relatively good h-indexes (for example, the Journal of Computer Science from Science Publications), are identified as possible predatory journals in Beall's list. Which one should I follow?
I agree with Víctor Herrero-Solana
"Scimago just include all the active journals in Scopus database "
and in my opinion, Scopus is not a whitelist. A journal in Scopus can still be predatory, and a predatory journal in Scopus can be delisted. So beware if your article gets accepted as-is without peer review!!!
- Apr 6, 2020
The occurrence of a new, extremely pathogenic betacoronavirus, SARS-CoV-2 (2019-nCoV), is responsible for the CoVid-19 pandemic health emergency. Accordingly, SARS-CoV-2 represents a serious global health warning characterized by high mortality, a high contagion rate, and a lack of clinically approved drugs and vaccines. In order to find safe and effective therapeutic options to treat this infectious disease, computer science could play an extremely relevant role to better understand the virus pathogenic mechanism as well as to propose novel therapeutic strategies. Accordingly, due to progress in computer science, in silico methodologies in medicinal chemistry, pharmacology, biology, genetics, and virology cover relevant tasks in modern research in these fields. Furthermore, due to the current global health warning, such computational techniques could speed up research in order to provide innovative and targeted approaches to fight the coronavirus emergency. In light of this, this Special Issue will highlight progress in terms of drug discovery, virus biology, and epidemiology to provide researchers with the most innovative computer-driven methodologies for fighting SARS-CoV-2.
For this Special Issue of Computation, we invite researchers in the fields of computational drug discovery (including drug repurposing approaches), computational biology/genetics, virology, bioinformatics, and epidemiology to submit original research, short communications, and review articles related to the use of computation to fight SARS-CoV-2.
Dr. Simone Brogi Prof. Vincenzo Calderone Guest Editors
Many thanks. This is most valuable for bringing together the best minds to seek solutions to monitor, heal, and prevent COVID-19.
Best wishes, Eduard
- Mar 24, 2020
I know that most of you have good experience writing strong research papers with novel and original ideas and results. I want some strategies and steps to start writing again because, since my master's thesis, I have not written anything particularly strong.
Therefore, please share the strategies you follow, not links, because I can find plenty of links myself. I'm asking the experts in my field (Computer Science), and especially in Deep Learning!
Thanks all in advance!
First, it is important that you identify your area of interest within your field. Then download related articles in that area, study them, and see which of the papers interests you the most. Study that paper, then download about three to four more papers on it from high-profile publishers. Using one of the papers as a template, start writing yours, and download more material as the need arises while putting your writing together. Note that when you first start writing, you are merely making a skeletal draft that you will revisit and shape accordingly. Be sure to use one of the reference managers to ease your writing, especially for managing your citations. Also be sure to go from review papers to technical papers (high-profile recent journals) before studying the technical aspects from books, internet sources, or other related materials.
- Dec 30, 2017
What are the common points between Data Mining and Swarm Computing?
Data mining is used for extracting hidden information from large databases, while swarm intelligence comprises artificial intelligence techniques used with machine learning. Data mining can use artificial intelligence techniques such as swarm computing methods.
- Mar 27, 2020
Please, I need suggestions for a research-based topic that would help me achieve a great result for my dissertation.
- Mar 29, 2020
I hope you are healthy and safe during this quarantine.
For any machine learning model, we evaluate the performance of the model based on several points, and the loss is amongst them. We all know that an ML model:
1- Underfits, when the training loss is much larger than the testing loss.
2- Overfits, when the training loss is much smaller than the testing loss.
3- Performs very well when the training loss and the testing loss are very close.
My question is directed to the third point. I am running a DL model (1D CNN), and I have the following results: (Note that, my initial loss was 2.5)
- Training loss = 0.55
- Testing Loss = 0.65
Nevertheless, I am not quite sure whether the results are acceptable, since the training loss is still a bit high (0.55). I tried to lower the training loss by giving the model more capacity (increasing the number of CNN layers and MLP layers); however, this is a very tricky process: whenever I increase the complexity of the architecture, the testing loss increases and the model easily overfits.
Finally, to say that our model performed very well, should we get a low training loss (say less than 0.1) or my case is still considered good too?
I look forward to hearing from you,
Thanks and regards,
That seems quite close really.
If you want to really get them closer, you could add a Dropout/SpatialDropout layer, which would help prevent overfitting.
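Dropout itself is simple: during training, each activation is zeroed with probability p, and the survivors are scaled by 1/(1-p) so the expected activation stays the same (the "inverted dropout" convention most frameworks use). A framework-free sketch of that mechanism:

```python
import random

def dropout(activations, p, rng):
    # inverted dropout: zero each unit with probability p,
    # scale survivors by 1/(1-p) so the expected value is unchanged
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
out = dropout([1.0, 1.0, 1.0, 1.0], p=0.5, rng=rng)
# every surviving unit is doubled, the rest are zeroed
assert all(v in (0.0, 2.0) for v in out)
```

At test time the layer is simply a no-op, which is why adding Dropout narrows the train/test loss gap without changing the inference-time architecture.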
- Jan 9, 2020
If you name a funding scheme in your answer, please also address:
What are the pros and cons of each?
If you have applied, what is your experience?
What is the funding limit?
Is it awarded every year or twice a year?
What is the range of project funding durations?
How competitive is the funding?
Is it single-university or a collaborative kind of funding?
There is a European grant, but it is very hard to get. You can check more information at the link below.
- Mar 13, 2020
- Mar 8, 2020
If I am going to submit my work to a CS conference, and also make my Python code publicly available, is it absolutely necessary that I set a seed in my Python code?
I ran an experiment, but I realized that I forgot to set a seed. Will my publication be rejected by the conference because the seed is not set?
You do not have to rerun the experiment, but a referee, researcher, or other reader may want to, and they cannot replicate it precisely without the seed, which you apparently forgot to set. What was the default seed? 0000? 1234? 3141? Send an email with a corrigendum immediately to the conference organisers stating that the randomiser was not seeded, so the run can never be exactly replicated. As it is a conference paper and not a journal paper, they may let you get away with this corrigendum.
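For future runs, seeding is a one-line fix: with a fixed seed, pseudo-random draws are bit-for-bit reproducible. A minimal sketch using Python's standard library:

```python
import random

def reproducible_draws(seed, n=5):
    # a seeded generator, isolated from the global random state
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(n)]

# identical seed -> identical sequence, so the experiment can be replayed exactly
assert reproducible_draws(1234) == reproducible_draws(1234)
```

If the experiment also uses NumPy or a deep-learning framework, those libraries keep their own generators and need seeding separately (e.g., numpy.random.seed); report all the seeds in the paper or README.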
- Mar 8, 2020
If we have a problem and we build four methods to solve it, should we say we build a model that contains four methods ? or should we use another word instead of model ?
They are almost the same; however, you have to choose according to the nature and style of your manuscript. Still, there are small differences depending on the situation. For example:
1) A model may be mathematical/statistical/physical/nominal/real/exact/approximate/optimal/maximal/minimal, etc.
2) An approach may be some way of data processing, computation, scaling, an idea, a technique, etc.
3) A method may be a chosen procedure for data processing, computation, calculations, etc.
4) An algorithm is just a step-by-step procedure for computation, calculation, or programming, etc.
- Mar 6, 2020
I have the following situation: I have a paper X about topic Y. For paper X I did a forward search with Web of Science (checking all new papers which cite paper X). Then I have downloaded all articles I have identified via forward search (approx. 1'000 Papers). Now I would like to sort these papers according to the frequency of specific keywords used.
For example: I have found paper Z via forward search (so paper Z cites paper X which is about topic Y). Now I want to check if paper Z is also concerned about topic Y or if it just refers to it in passing. For that I search for specific keywords which correspond to topic Y. According to the frequency of the specific keywords mentioned in paper X, I want to classify it in the category "relevant" or "not relevant". Now, how can I determine the threshold for the keywords? That is, if paper X only uses the specific keyword once it is most probably not relevant to topic Y. But if it mentions the specific keyword 20 times it is probably relevant for topic Y.
Is there a recognized methodology to determine or approximate a threshold for the keyword frequency which allows to distinguish if a paper is relevant to topic Y or not?
With this approach I hope to reduce the 1'000 papers to those which are about topic Y.
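There is no universal threshold; the recognized route is to hand-label a small sample of the 1,000 papers as relevant/not relevant and pick the cutoff that best separates the two groups on that sample. A simple sketch of the counting step, normalized by paper length so long papers are not favoured (the keywords and threshold here are placeholders you would calibrate on the labelled sample):

```python
import re

def keyword_rate(text, keywords):
    # occurrences of any keyword per 1,000 words, normalizing for paper length
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(words.count(k) for k in keywords)
    return 1000 * hits / max(len(words), 1)

def is_relevant(text, keywords, threshold=2.0):
    # a paper counts as relevant if the keyword rate clears the threshold
    return keyword_rate(text, keywords) >= threshold

paper = "Topic modelling of citations. Topic relevance is measured by topic frequency."
print(is_relevant(paper, ["topic"]))  # True
```

A more principled variant scores papers by TF-IDF weight of the topic keywords and chooses the threshold by maximizing precision/recall on the labelled sample; the normalized-count version above is the minimal form of the same idea.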
- Mar 4, 2020
I have found some beautiful techniques to completely or partially solve problems such as:
1- Goldbach’s conjecture
2- Riemann hypothesis
See more in my notions:
- Mar 2, 2020
Can someone suggest a fully funded distance-learning postgraduate diploma in computer science or a related area? It would be a great help.
This will not be easy. The countries that still have tuition-free education are few and mostly found in Europe; you can see them here https://www.study.eu/article/study-in-europe-for-free-or-low-tuition-fees . If you come from a non-EU/EEA country there are often fees, though sometimes lower, in these countries. As a non-EU/EEA citizen you also have to qualify for a student visa for that country, which involves showing that you have enough funds to support your living costs. Among non-tuition master programmes, there are almost no "distance educations" where you can remain in your 顺心彩票 country. For master's education, however, you can look for "Erasmus Mundus" university cooperations and master programmes. As part of the concept, they often have some student places with a grant, enabling students from third countries to study.
For PhD candidate education there are sometimes, as in Sweden, PhD student positions that you apply for; if accepted, you have tuition-free education, a monthly salary, and a student visa for 4 years. It is like this in some other countries as well, such as the other Nordic countries, Germany, Switzerland and Belgium. See this RG discussion http://www.fondpageant.com/post/which_countries_offer_PhD_positions_as_paid_jobs_in_English
These PhD positions are advantageous, but not often "on distance", whatever that means. They can, however, sometimes be used in a half-flexible way. If you want a PhD "on distance", try Walden University.
There is another option that may come close. The MOOC platforms have long offered nano-masters, specialisations, micro-masters and similar course packages in interesting fields, and certificates from these are often attractive to employers as part of a CV. Not cost-free, but rather cheap, and fees are sometimes adapted to a country's economy. More and more, these platforms are also offering BSc and master programmes with ordinary credits and exams. Not free, but not very expensive, and with a lot of flexibility. Begin here https://www.coursera.org/degrees and continue searching other MOOC platforms such as edX, FutureLearn etc., and search across them all at classcentral.com.
Some US universities also offer online PhD and master's programmes, but they come at a cost, though sometimes cheaper than the campus option.
- Mar 3, 2020
Hi! I am a student at the University of Maribor - Faculty of Electrical Engineering and Computer Science (specifically Electronics). I want to know more about how I can connect BDDs or ZBDDs with electronics problems. Any concrete ideas?
Thanks!
At first glance, nothing seriously applicable comes to mind, but just for exercise, and if you don't mind reducing the wide field of electronics to ordinary resistors:
You could define functions like
fs8(1.0, 1.2, 1.5, ..., 8.2), which decides whether a set of resistors in series results in a total resistance of 8 ohms or not. The twelve arguments are taken from the E12 series, for example. The argument "1.0" has the value 1 if a resistor of 1.0 ohm is present, and 0 otherwise. For example, fs8(0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0) = 1 because 1.2 + 6.8 = 8, while fs8(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1) = 0 because 8.2 != 8.
For such functions, you could sketch (part of) the binary tree, then the BDD, and at last the ZBDD.
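Before reaching for a BDD library, the function fs8 can be prototyped as a plain subset-sum check over the E12 values; enumerating its full truth table then gives exactly the Boolean function that the binary tree, BDD, and ZBDD would encode. A small illustrative sketch (resistor tolerances are ignored):

```python
from itertools import product

# The twelve E12 base values (in ohms).
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def fs8(bits):
    """Return 1 if the selected resistors in series total 8 ohms, else 0.

    bits is a tuple of 12 zeros/ones; bits[i] == 1 means the i-th E12
    resistor is present in the series chain.
    """
    total = sum(r for r, b in zip(E12, bits) if b)
    return 1 if abs(total - 8.0) < 1e-9 else 0

# 1.2 + 6.8 = 8, so selecting the 2nd and 11th values yields 1:
assert fs8((0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0)) == 1
# A single 8.2-ohm resistor does not sum to 8:
assert fs8((0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)) == 0

# Enumerating all 2**12 inputs yields the truth table from which the
# binary tree, BDD, and ZBDD can be sketched:
ones = sum(fs8(bits) for bits in product((0, 1), repeat=12))
print(f"{ones} of 4096 inputs are satisfying")
```

Because so few of the 4096 inputs are satisfying, this is exactly the kind of sparse function where a ZBDD becomes much smaller than the full binary tree.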
- Aug 21, 2019
Currently there is a trend to apply virtual, ICT-based methods to teaching in HE. Frequently professors face the situation where they have been teaching in face-to-face modality and want/need to do the same in distance education, in a virtual modality. Does anybody have practical experience, or know of a specific methodology, for Computer Science courses?
Hi. I think you can use a lot of tools according to the circumstances:
an LMS (with passwords for your students) in the case of official courses with a limited number of students;
a website (containing the full course programme and some quizzes, with external links to technical tools and useful software), plus e-mail/other ICT for continuous contact and occasional questions;
a MOOC for fast courses for a large public, plus e-mail/other ICT for continuous contact and occasional questions; ...
- Feb 21, 2020
My research focuses on assessing an organization's security culture, and some people advised me to use fuzzy methods. My question: are fuzzy AHP and TOPSIS related to computer science? My paper must be related to computer science.
AHP and TOPSIS are both among the best-known and most effective MCDM techniques, and there have been many applications of them in computer science to date. You can search for examples in the journal Computers & Industrial Engineering.
- Jan 31, 2020
Please, I am an MSc Computer Science and Technology student and my research area is software testing. My professor is really interested in fault localization, but I am having huge challenges in performing experiments. Could you recommend a company where I could intern, so I can gain industrial experience in software testing?
I am currently in China, and I can travel to any country just to acquire the knowledge.
Thanks in advance
There is no need to travel to any country to acquire knowledge that you can easily get on the internet and in books.
I would suggest working on projects on platforms like GitHub, where you can collaborate and apply what you learn, build your resume, and gain practical experience.
You can then apply for internships on various sites that are easily found on the internet.
- Feb 5, 2020
To keep it precise and simple: what topics must be included, and what learning path must be followed, when designing a computer science syllabus for the future?
Foremost, the syllabus must be flexible enough to adapt to current advances in computer science, such as React.js and the use of Node.js.
Topics must include:
(1) Programming languages such as Python, extended to their useful applications in fields such as Data Science and AI
(2) Operating systems: Linux, how to use virtual machines (such as Ubuntu Studio, Kali Linux), and bootable pendrives such as that of Tails OS
(3) Arduino Uno and Raspberry Pi, to learn how to make use of the interaction between programming (software) and hardware (physical objects) [for example, making a VPN router or an EEG for measurement and recording]
(4) Background and history of Computer Science with the inclusion of the current syllabus but giving less priority to the old concepts which are replaced by better practical ones
(5) Students must be taught how to develop their own websites, mobile applications, desktop software, and many other practical applications of computer science
(6) AI, Data Science, machine learning
(7) GitHub should gain more priority and importance
The list isn't over yet; it never will be, as technology keeps evolving. For example, quantum technology could be the next big thing to be studied in computer science.
I think the best syllabus must include topics which allow computer science students to think for themselves and develop a learning attitude, so that they can pick up subtopics on their own, such as TensorFlow, OpenCV, and how Python can be used to analyse videos.
Also, the syllabus must be designed with more emphasis on practical work, for which rote learning is not possible, and which leaves every student who completes the syllabus with skills that are of practical value and in demand in the market.
- Feb 5, 2020
I wonder how a machine, viz. the computer, became a stream of study with "science" appended to it. How did it happen? Is it a science?
I'm now thinking of more than five. With respect to the liberal arts: the use of computers in the design of graphic arts, music/sound arts, movie arts, and fine arts. Computers in medical science, urban planning, forensics; one could go on and on...
- Oct 30, 2019
If you have an academic email address, you will not need to be endorsed; it suffices to register via that address. If you do not have an academic email address, you have to ask someone who has one to endorse you. Once you get the endorsement from that person, you can upload your preprints to arXiv.
- Jan 5, 2020
I am looking to enter the field of Complex Systems or Complexity Science as a postgraduate in my Computer Science faculty, specifically focusing on Agent-Based Modeling. However, I am unclear on the current problems being addressed, what the research trends are, and how it may be applied in industry from a CS perspective, because there is currently no one in my faculty involved in Complex Systems. Any clarification on this issue would be greatly appreciated, thank you.
The Purdue Homeland Security Institute has been using AnyLogic ABM software to garner insights related to active shooter events, test "run, hide, fight" methodologies, and establish effective evacuation procedures for theme parks. I'm only familiar with this software, but I know there are other good ones out there. Look to see if there are libraries available for your intended use. We rely on the logic for the pedestrian libraries pretty often that AnyLogic provides.
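For a feel of what agent-based modeling looks like from the CS side, here is a deliberately tiny evacuation model in plain Python. Every parameter (corridor length, move probability, agent count) is a made-up placeholder; tools like AnyLogic add continuous space, visualisation, and calibrated pedestrian dynamics on top of this same basic idea:

```python
import random

def evacuate(n_agents=20, corridor=30, p_move=0.8, seed=42):
    """Toy agent-based evacuation: agents occupy integer cells 1..corridor
    and each tick try to step one cell toward the exit at cell 0. A cell
    holds at most one agent, so congestion emerges from local rules.
    Returns the number of ticks until all agents have exited.
    Requires n_agents <= corridor (distinct starting cells)."""
    rng = random.Random(seed)
    agents = rng.sample(range(1, corridor + 1), n_agents)  # distinct cells
    ticks = 0
    while agents:
        ticks += 1
        occupied = set(agents)
        next_agents = []
        for pos in sorted(agents):  # agents nearest the exit move first
            target = pos - 1
            if rng.random() < p_move and target not in occupied:
                occupied.discard(pos)
                if target > 0:           # target 0 is the exit: agent leaves
                    occupied.add(target)
                    next_agents.append(target)
            else:
                next_agents.append(pos)  # blocked or chose not to move
        agents = next_agents
        if ticks >= 10_000:              # safety guard for odd parameters
            break
    return ticks

print(evacuate())  # ticks until the corridor is empty
```

The point of ABM shows up even here: queueing near the exit is not programmed in anywhere; it emerges from the one-agent-per-cell rule, and you study it by sweeping parameters over many runs and seeds.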
- Jan 31, 2020
Please, I am an MSc Computer Science and Technology student and my research area is software testing. My professor is really interested in fault localization, but I am having huge challenges in performing experiments. Could you recommend a company where I could intern, so I can gain industrial experience in software testing?
I am currently in China, and I can travel to any country just to acquire the knowledge.
Thanks in advance
Dear Adekunle Ajibode,
You can easily use the Test-Driven Development technique.
I would suggest you try this address:
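To make the TDD suggestion concrete, here is a minimal red-green-refactor cycle using Python's unittest; the `slugify` function is just an invented stand-in for whatever unit you are testing:

```python
import unittest

# Step 1 (red): write the test before the implementation exists.
# Running it at that point fails, and the failure drives the design.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")
        self.assertEqual(slugify("  Fault   Localization "), "fault-localization")

# Step 2 (green): write the simplest implementation that passes.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Step 3 (refactor): rerun the suite after every change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The same test-first habit transfers directly to fault localization research: a failing test pinpoints the code region to inspect, which is the raw input most spectrum-based fault localization techniques start from.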
- Jan 28, 2020
Hi friends, I am looking for help deciding on topics for a PhD proposal in Data Science. I am from the BPFI domain, working in retail banking. I have had the opportunity to work on compliance and regulatory projects, e.g. digital transformation, replatforming, data migration, BASEL, SEPA, PSD2, CRM, GDPR, and I recently completed my master's in computer science (Data Analytics).
Please also advise me on the challenges, and on the post-PhD future in academia as well as in industry jobs.
Thanks in advance !
Thanks a million, Martin. I am trying for a topic in which I have invested time and effort at work.
- Jan 23, 2020
In general, a first principle is a basic assumption that cannot be deduced any further. Different fields of human activity have different sets of first principles; for engineering, for example, those are the laws of physics. Often a great innovation in science/engineering happens when the new idea is not built on top of the current state of the art or commonly accepted technology. Instead, the problem is restarted from those first principles, in other words from "what we know for sure", and rebuilt from there.
So, what are the first principles known so far in computer vision, particularly in object detection? Are there fundamental "can do" and "can't do" results that take their roots and proofs from computer science, physics, or mathematics?
Object recognition is a computer vision technique for identifying objects in images or videos. It is a key technology behind driverless cars, enabling them to recognize a stop sign or to distinguish a pedestrian from a lamppost.
Size, color, and shape are some commonly used features; which ones a system depends on varies with the types of objects to be recognized and the organization of the model database. Using the features detected in the image, a hypothesizer assigns likelihoods to the objects present in the scene.
Refer to the following link:
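The hypothesizer idea in the passage above can be sketched with toy symbolic features. Everything below (the feature names, the model database entries, the matching rule) is invented for illustration; a real system would use learned visual features rather than hand-written labels:

```python
def hypothesize(detected: dict, models: dict) -> dict:
    """Toy hypothesizer: score each model object by the fraction of its
    stored features that match the features detected in the image."""
    return {
        name: sum(detected.get(k) == v for k, v in feats.items()) / len(feats)
        for name, feats in models.items()
    }

# Hypothetical model database and detected features:
models = {
    "stop_sign":  {"shape": "octagon", "color": "red",    "size": "medium"},
    "pedestrian": {"shape": "upright", "color": "varied", "size": "tall"},
}
detected = {"shape": "octagon", "color": "red", "size": "large"}

scores = hypothesize(detected, models)
print(scores)  # stop_sign matches 2 of 3 features, pedestrian 0 of 3
```

The "can do / can't do" question then becomes concrete: such likelihood assignment works only as well as the features and the model database, which is exactly where deep learning replaced hand-crafted pipelines.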
- Nov 19, 2019
Hello everyone! My team and I are working on a mobile application dedicated to promoting mental health, and we need some insight. Please take our survey, and if you would like to help further, you can spread it in your communities. If you have any suggestions, we would love to hear them.
Machine learning algorithms could help determine key behavioral biomarkers to aid mental health professionals in deciding if a patient is at risk of developing a particular mental health disorder. Additionally, the algorithms may assist in tracking the effectiveness of a treatment plan. ( https://towardsdatascience.com/machine-learning-and-mental-health-7981a6001bd5 )
- Mar 14, 2019
“AIs will colonize and transform the entire cosmos,” says Juergen Schmidhuber, a pioneering computer scientist based at the Dalle Molle Institute for Artificial Intelligence in Switzerland, “and they will make it intelligent.”
What do you think? Do you think AI will change everything in life? Do you believe that AI will become a threat to humans or not?
And if yes, how near is that day?
Dear Colleagues and Friends from RG,
The above discussion inspired me to formulate the following question:
How can artificial intelligence evolve into artificial consciousness in the future?
On the basis of the above considerations and the conclusions from this interesting discussion, I formulated the following thesis: the unlimited development of artificial intelligence may in the future lead to the creation of artificial consciousness, by combining cybernetics, ICT, bioinformatics, cybergenetics and neuroinformatics, and through the development and implementation of machine learning and other advanced Industry 4.0 data processing technologies.
Below I have described the key determinants supporting the formulated research thesis. To the above discussion I would like to add the following conclusion, formulated as a summary of my previous considerations on this topic: the axiomatic and technological implications of the possibility of building artificial consciousness.
For several years, artificial intelligence researchers and scientists have been discussing the axiomatic and technological implications of building artificial consciousness, one of whose goals is to seek the answer to the question: Will artificial intelligence evolve into artificial consciousness in the future?
Advanced Industry 4.0 data processing technologies, above all machine learning and artificial intelligence, are also used in attempts to build machines equipped with the ability to self-improve the tasks and programmed activities they perform.
Perhaps in the future there will be an attempt to build artificial awareness with which supercomputers will be equipped. In my opinion, consciousness can be modeled mathematically only in theory. Even if a mathematical model of artificial consciousness were built using ICT and Industry 4.0 (and, in the future, Industry 5.0) technologies, and artificial intelligence based on this model were created in quantum computers installed, for example, in autonomous robots or androids, it would still be only artificial intelligence, without emotions and without the essence of human consciousness. An analysis of the nature of human thought is necessary to distinguish between human intelligence and the various artificial intelligence technologies being developed.
In advanced computerized neural network systems, artificial intelligence systems are created whose task is to solve problems consisting of complex sequences of many algorithms, together with self-learning systems for solving complex problems. In these systems, man tries to create a structure that solves complex analytical tasks and learns from its mistakes. The advantage of artificial intelligence systems over their creator, i.e. man, is that they make far fewer mistakes during repeated processes of solving complex tasks and learning new, increasingly complex formulas for applying specific algorithms. However, after these artificial intelligence systems have been developed and applied in many computerized fields of modern economies, what will be the next stage of technological progress in this field?
Will the age of artificial consciousness come after the age of artificial intelligence? In my opinion, this is impossible. Despite the rapid progress in the development of new generations of artificial intelligence, it will never be possible to create an artificial creation that is the equivalent of human intelligence, taking into account human emotional intelligence and the specifics of human thought, consciousness and feelings. Therefore, the thesis can be formulated that in some respects artificial intelligence will probably never match human intelligence. The machine will be able to solve very complex problems and tasks, but it will not know why it does so, who it is, or in what world it operates; it will not be able to realize its existence in the Universe, etc. Machines in the form of autonomous androids can perform physically difficult work that a human is not able to perform.
Quantum computers equipped with Big Data Analytics will be able to solve analytical tasks many times faster than the most powerful human minds. However, they will not be aware of their existence. Human awareness of existence evolved over millions of years of evolution of the human mind, and of the human-like primates that preceded human beings. Human consciousness was created in a process of evolution lasting millions of years, during which a complex biological organism continuously interacted with its environment. While artificial intelligence is based on neural network systems that mimic the human central nervous system in a simplified way, and its computational power in specific elementary tasks exceeds the analytical abilities of a human being, the level of complexity of a living mammalian organism is still many times higher than that of the most advanced computers.
In line with the above, in my opinion, the unlimited development of artificial intelligence may lead in the future to the creation of artificial consciousness, by combining cybernetics, ICT, bioinformatics, cybergenetics and neuroinformatics, and through the development and implementation of machine learning and other advanced Industry 4.0 data processing technologies.
In view of the above, the following question arises:
Will artificial neural structures become such advanced artificial intelligence that artificial consciousness will arise? Theoretically, one can consider this type of project; however, to verify it realistically, one would need to create such artificial neural structures. Research on the human brain shows that it is a very complex and not fully understood neural structure. The brain has various centers and areas that manage the functioning of specific organs and processes of the human body. In addition, consciousness is also complex, consisting of elements of emotional, abstract and creative intelligence, etc., which likewise function in separate sectors of the human brain.
- Dec 25, 2019
I'm looking for journals indexed in Scopus and Clarivate that accept articles monthly, free of cost.
- Jan 3, 2020
My name is Panagiotis Koilakos and I am studying computer science at AUEB, Greece.
During an HCI (Human-Computer Interaction) course, we had to select a 顺心彩票 device and try to beautify it through a better UI.
This is the final evaluation of the project, and we would like some people to run the app and answer a short questionnaire (we kept it short, promise!).
Here ( https://docs.google.com/forms/d/e/1FAIpQLSctCd4jPMu1C1GA9V1dJevKzqrKsfUXfPpAqMYoAE2_Ci2fdw/viewform ) you may find the questionnaire (.exe can be downloaded from there - not the best option...I know).
Thanks very much to anyone who spares 10 minutes to give a helping hand in finalizing this.
Kindly elaborate sir
- Jan 4, 2020
I am very happy to tell you that I am interested in networking and would like to continue my research in the field of wireless networks. I need some suggestions on where to start my journey in the field of computer science and networking. I would also be thankful if you could suggest current research trends and their evolution.
If you want to work on vehicular communication network systems, then go for VANETs and work on VCC (vehicular cloud computing).
- Feb 20, 2018
In layman's terms, artificial intelligence is an area of computer science where computers are developed to behave much the way humans do. The research topic will aim to illustrate the various levels of AI implemented in various companies to date, the future projects that may have an impact, and the argument over whether AI will 'support' the HR industry or 'replace' the current workforce, and to what extent the HR industry will be impacted globally. Which skills can be automated and which cannot are a few things I want to establish at the end!
The HR (Human Resource) industry will become the HR (Human Responsibility) industry. Beyond AI, the changes should be analyzed from the perspective of building a sustainable future. Our research should be as contributive as possible.
- Jan 3, 2020
I need help, please. Can anybody help me find a good master's thesis topic in the field of computer science/IT?
The topic should relate to improving wireless network performance/security using one of the artificial intelligence techniques.
Any help would be appreciated.
Thank you in advance.
Read the documents produced by ITU-T FG-NET2030 (Networks by 2030). Several use cases refer to AI for the management of advanced network features.
Another source is ETSI ISG NFV (network function virtualisation), where AI might help manage complexity beyond human capability (at the control layer).
- Jan 1, 2020
There are a lot of problems in medicine that need the newest technologies in computer science, such as machine learning and deep learning, to solve them.
Can anyone mention some of these problems that are unsolved till now?
Let's start with the field that I know best: biosignal processing, classification, and prediction. There are many unresolved problems, like
* Prediction of arrhythmias
* Prediction of epileptic seizures
* Reconstruction of physiological interdependencies of various physiological processes using biosignals.
* Reconstruction of physiological paths in the brain from biosignals.
* Assessing the health condition of people using biosignals.
We can do the following research
* Data mining for disease-genome dependencies from available databases.
* Prediction of disease spreads using internet searches.
* Assessment of the population health using internet searches.
* Cross AI methods with deep knowledge of complex systems theory and apply it to medicine -- this is my area of research.
* Study and predict drug interactions and side effects in patients. This will save a lot of unnecessary suffering in those using medical drugs.
* The above can be supported by an active search through all available data for possible, future drug interactions prior to their application to patients.
* Such research can help medical doctors to avoid deadly or highly damaging drug interactions. Each patient reacts differently to the same drugs! We need to know why and especially when it happens!
* Start development of advanced AI methods tailored towards the needs of bio-medicine.
All the above depends on how reliable the databases of biosignals, medical records, bio-imaging, laboratory results, and many others are.
If you want successful research in the field of AI, good open-access databases are a must. We have an extreme shortage of such databases. You can build a very successful career by building such a database(s). :-)
This is just a short list of all the possibilities. :-)
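As a flavour of how simple some baselines in this space can be, and how far they are from clinical use, here is a purely illustrative screen for irregular heart rhythm based on RR-interval variability. The 0.15 cutoff and the sample series are arbitrary placeholders, not medical criteria:

```python
from statistics import mean, pstdev

def flag_irregular(rr_intervals_ms, cv_threshold=0.15):
    """Toy screen: flag a beat series when the coefficient of variation
    of its RR intervals (beat-to-beat times, in ms) exceeds a threshold."""
    cv = pstdev(rr_intervals_ms) / mean(rr_intervals_ms)
    return cv > cv_threshold

print(flag_irregular([800] * 10))             # steady rhythm
print(flag_irregular([600, 1000, 650, 980]))  # erratic intervals
```

Serious prediction of arrhythmias or seizures replaces such a hand-set threshold with models learned from large labelled biosignal recordings, which is exactly why the open-access databases discussed above matter so much.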
- Jan 2, 2019
Dear Friends and Colleagues from RG,
I wish You all the best in the New Year. I wish you a successful continuation and successes in scientific work, achieving interesting results of scientific research in the New Year 2019 and I also wish you good luck in your personal life, all the best.
In the New Year, I wish You success in personal and professional life, fulfillment of plans and dreams, including successes in scientific work, All Good.
In the ending year, we often ask ourselves:
Have we successfully implemented our research plans in the ending year? We usually answer that a lot has been achieved, that some of the plans from a year ago have been realized, but not all goals have been reached.
I wish You that the Next Year would be much better than the previous ones, that each of us would also achieve at least some of the planned most important goals to be achieved in personal, professional and scientific life.
I wish You dreams come true regarding the implementation of interesting research, I wish You fantastic results of research and effective development of scientific cooperation.
I wish You effective development of scientific cooperation, including international scientific cooperation, implementation of interesting research projects within international research teams and that the results of scientific research are appreciated, I wish You awards and prizes for achievements in scientific work.
I wish You many successes in scientific work, in didactic work and in other areas of your activity in the New Year, and I also wish you health, peace, problem solving, prosperity in your personal life, all the best.
Thank you very much.
I wish you the best in New Year 2019.
Happy New Year 2020.
thank you, all the best to come
- Dec 30, 2019
I'm looking forward to getting some suggestions for writing a research proposal about car plate detection using DL. I am looking for the following:
1- Writing Style
2- Topics for Ph.D. research proposal in Deep Learning.
3- How can I engage the attention of the reader in my research proposal?
4- Examples of research proposals using deep learning techniques.
License plate reading is a very well solved problem in computer vision and probably not worth a good PhD research project. You should at least try to find a problem that is not already solved by commercial libraries or software and that would require at least some small theoretical innovation/discovery.
My suggestion: look for some recent "state of the art" review papers on deep learning applications to find:
1) An application domain that has a great potential future;
2) Has lots of unsolved problems;
3) That is of interest to you.
The secret of a great PhD research subject is at the intersection of these three sets!
- Dec 9, 2019
I need lists of top journals in computing that are free or inexpensive.
Well, there are some journals indexed in Scopus and Web of Science that are free, but in a real sense nothing is actually free.
Firstly, free journals can take a very long time to publish. When I first published in a free Inderscience journal, the review period was 13 months.
Secondly, after publication, you are placed under an embargo not to share the article for some months.
These are the difficulties with free, highly indexed journals. However, despite these limitations, one can still get funding for high-impact journals that are not free.
- Dec 29, 2019
AI has been an interesting topic in other fields, especially in Instrumentation Engineering and Computer Science. Can we, geotechnical engineers, use AI in our field? If yes, then how can we use it?
- Dec 20, 2019
Journals, magazines and letters all publish scientific articles. What is the technical difference between these articles and their recognition?
Writing style, technical soundness, number of words, etc.
A magazine is definitely not the same as a journal. Magazines normally carry popular articles, not original work. A scientific journal publishes original scientific work not published elsewhere. 'Letters' or 'Notes' are of shorter length; they may include a comment on a recently published paper by the author or someone else. Normally the editor decides on 'Letters' and 'Notes'; seldom are they sent to reviewers, so they get published quickly. A full-length paper in a reputed journal has to undergo peer review and may take up to a year or so to get published.
- Dec 28, 2018
Is the International Journal of Advanced Computer Science and Applications (IJACSA) indexed in Scopus?
Does the journal have an SJR?
It is in the master list, and in Scopus now as well, but it has no impact factor.
- Dec 13, 2019
Mathematics is crucial in many fields.
What are the latest trends in Maths?
Which are the recent topics and advances in Maths, and why are they important?
Please share your valuable knowledge and expertise.
For me, as well as for the majority of other researchers, Mathematics is the language of Science!
- Dec 14, 2019
Surfing the web, I wasn't able to find a simple and effective answer to my question. In other words, I need to speed up code I wrote on GIS data using several packages such as raster, rgdal, biomod2 and ncdf4: do I need to wait for somebody else (or myself) to develop a NEW raster etc. package which forces R to use the GPU, or is it already possible simply by loading additional packages such as gpuR or parallel before my code runs?
All my best
Although I find your question really interesting and important, and hope that somebody will give a more positive answer to it, I'm afraid it's not possible to make all packages use multiple CPU cores and the GPU simply by loading a package. As a workaround rather than a solution: Microsoft R Open provides built-in multithreaded matrix algebra, which is used by a lot of packages. You may find that this software is a useful alternative to base R.
- Dec 13, 2019
I want to ask what the scope of IoT in agriculture (smart agriculture) could be, if I want to start research in this field.
I also need some suggestions on what the research areas in this field could be for a person with a computer science background.
As a person with a computer science background, you would be more focused on the technological aspects of the project.
Usually such projects start by defining a problem (monitoring the dryness of soil, the ripeness of fruits, specific plant diseases, cattle or sheep, etc.); then you deploy the appropriate sensors for the selected application, collect data from these sensors, and analyse the data with various ML/AI tools to deliver reports or predictions of value to the farmers.
This is only one example and you can find more examples here:
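The sensor-to-report pipeline described above can be miniaturised into a sketch. All numbers here (the moisture readings and the 30 % dryness threshold) are hypothetical; in a real deployment the readings would come from actual field sensors:

```python
from statistics import mean

def irrigation_alerts(readings: dict[str, list[float]], dry_threshold: float = 30.0):
    """Flag field zones whose average soil-moisture reading (in %) over
    the last window falls below the dryness threshold."""
    return [zone for zone, window in readings.items()
            if mean(window) < dry_threshold]

# Hypothetical last-hour readings from three sensor zones:
readings = {
    "zone_A": [42.0, 40.5, 39.8],   # healthy
    "zone_B": [28.1, 27.4, 26.0],   # dry -> alert
    "zone_C": [33.0, 29.5, 31.2],   # borderline, mean above 30 -> no alert
}
print(irrigation_alerts(readings))
```

From here, the ML/AI step would replace the fixed threshold with, say, a model predicting soil moisture a few days ahead from weather and historical sensor data.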
- Dec 10, 2019
Does someone have information on the Summer Student Programme at CERN?
For example: your experience, the conditions there, whether it is desirable or not, whether you think I should sign up, and any other information.
I am in materials engineering.
- Dec 5, 2019
I am doing an MS in computer science and my subfield is networking. I am interested in doing research in IoT, so I need guidance and help in selecting a topic for my research. Could anyone please list good topics matching my interest? Thank you in anticipation.
1) IoT and data: data fusion from multiple sensors, and extraction of behaviours/patterns
2) IoT and people
People can be beneficiaries or victims (my neighbour's automatic vacuum cleaner, etc.)
3) IoT and AI
AI supporting IoT (data analysis, decision, action)
IoT supporting AI: IoT-equipped Google cars gather data, serving as input for the Google Maps AI
- Nov 23, 2019
Can anyone answer this question that has perplexed me for years? What kind of scientific discipline blatantly violates basic principles or proven rules of the scientific method? What kind of scientist fiercely defends such a blatant violation?
The last time a scientific discipline blatantly violated the scientific method was before the 17th century, when researchers fiercely defended the geocentric paradigm in violation of the scientific method. In their defense, most of the basic rules and principles of the scientific method were not yet known or properly established. Most of those rules and principles were formed, and have been perfected since the 17th century, by many great philosophers of science and brilliant scientists, particularly based on the valuable lessons and insights gained from the painful experience of overturning the geocentric paradigm, which transformed basic science from fake science into real science.
What is a scientific discipline? A discipline can be scientific if and only if the BoK (Body of Knowledge) in all its published textbooks and accepted research publications has been acquired and accumulated without violating the basic principles and rules of the scientific method. The purpose of the modern scientific method is to perfect the quality of knowledge by finding and eliminating imperfections and/or anomalies. The scientific method doesn't offer a recipe, hints, or guidelines, and doesn't impose restrictions on doing research to acquire new knowledge; rather, it provides tools that keep scientific research on the right path by detecting mistakes that can divert research efforts onto a wrong path.
Each piece of knowledge in the BoK must be supported by falsifiable proof (backed by evidence and facts), where each piece of knowledge and its proof is open to challenge and perfected by rigorous testing and empirical validation. The research community of the 17th century blatantly violated this basic scientific rule when it tried to suppress and tacitly sabotage efforts to expose the 2300-year-old unproven, flawed presumption (i.e. that the Earth is at the center) at its very foundation.
Except for computer science, I could not find any evidence that any other scientific discipline violated the scientific method so blatantly. It is beyond my comprehension why researchers in computer science fiercely defend such a blatant violation of basic principles or proven rules of the scientific method.
Unfortunately, software researchers have acquired a great deal of invalid BoK by blatantly violating the scientific method. Since it is impossible to solve any problem by relying on invalid knowledge, software researchers concluded that it is impossible to solve certain problems (e.g. real CBD/CBE or real computer intelligence). But it is not hard to solve those problems by acquiring the relevant valid knowledge. Please refer to ValidKnowledge.pdf for more information.
P.S: I also failed to find a real scientist, who can understand code of conduct for real scientists: CodeOfConduct.pdf
Dear Raju and Shadi,
Excellent idea to go into these subjects in more depth: studying your work, Raju, and providing Shadi's team with a first article for SDE 2020, so that it can introduce the book with an extensive work. I will work on this in a few days and come back to you.
PS: one article gives an idea of this work, going from a political point of view to a metaphysical one (it may be adequately translated from French by automatic translators): https://une-vraie-politique-pour-notre-pays.net/2019/01/04/limposture-intellectuelle-face-cachee-dun-desastre-clef-dune-reussite-a-venir/
- Nov 17, 2019
Hi, I'm working on sequence alignment algorithms. My background is computer science. Given two sequences, what could the maximum length of a gap be, and how many insertions/deletions in one stretch should I consider? I think more than one insertion/deletion in one stretch is useless...? My algorithm will accept a large text file and report the locations in the file where the best-matching regions are found.
When I executed these two strings on ebi.ac.uk, I got the following result, 7 pairs match.
EMBOSS_001  A T C G A C T A A C C A - - - -
EMBOSS_001  - T C A G C T - T C C A G C T A
However, my algorithm reports an 8 pairs match, which one is better? Please suggest. Many Thanks in advance...
Query  A T C G A C T A A C C A
File:  - T C A G C T T C C A G C T A
With classical biological sequence alignment algorithms such as BLAST, PSI-BLAST and FASTA, the number of gaps inserted is controlled by two parameters that contribute to the alignment score: the gap insertion (opening) penalty and the gap extension penalty. A high gap insertion penalty and a low gap extension penalty give preference to a few large gaps rather than many short gaps.
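As a sketch of how those two penalties interact, here is a minimal global-alignment scorer with affine gaps in the style of Gotoh's algorithm (the scoring values are illustrative defaults, not those used by BLAST or EMBOSS, and no traceback is done):

```python
NEG = float("-inf")

def affine_align_score(x, y, match=1, mismatch=-1, gap_open=2, gap_extend=1):
    """Global alignment score where the first column of a gap costs gap_open
    and each additional column costs gap_extend."""
    n, m = len(x), len(y)
    M  = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignment ends in (mis)match
    Ix = [[NEG] * (m + 1) for _ in range(n + 1)]   # ends with a gap in y
    Iy = [[NEG] * (m + 1) for _ in range(n + 1)]   # ends with a gap in x
    M[0][0] = 0
    for i in range(1, n + 1):
        Ix[i][0] = -gap_open - (i - 1) * gap_extend
    for j in range(1, m + 1):
        Iy[0][j] = -gap_open - (j - 1) * gap_extend
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if x[i - 1] == y[j - 1] else mismatch
            M[i][j]  = s + max(M[i-1][j-1], Ix[i-1][j-1], Iy[i-1][j-1])
            Ix[i][j] = max(M[i-1][j] - gap_open, Ix[i-1][j] - gap_extend)
            Iy[i][j] = max(M[i][j-1] - gap_open, Iy[i][j-1] - gap_extend)
    return max(M[n][m], Ix[n][m], Iy[n][m])

print(affine_align_score("ATCGACTAACCA", "TCAGCTTCCAGCTA"))
```

Raising gap_open relative to gap_extend makes one long gap cheaper than several short ones, which is exactly the "few large gaps" preference described above.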
- Sep 17, 2019
Distinguished Professor and Researchers
Computing Science, Communication, Security and Privacy
Call for panel members for the Technical Program Committee - International Conference on Computing Science, Communication and Security (COMS2-2020), to be held at Ganpat University, India, during 26th - 27th March 2020.
Conference web site http://coms2.gnu.ac.in/index.php
It is very important and essential to establish the academic and research credibility of the conference by involving domain-expert researchers and professors like you from across the world, to attract a good number of high-quality research papers, and to build a public image for the conference.
Interested professors and researchers in the areas of Computing Science, Communication, and Security & Privacy may please send me their name, designation, affiliation (university, institute, or industry name), and email id at
Dr. Nirbhay Chaubey
Dean, Computer Science, Ganpat University, India
Please share the conference website link for further details.
- Nov 21, 2019
Being a computer science student, I don't know much about statistical testing. Recently, however, a lot of papers have reported statistical validation of their results. In machine-learning-based prediction of effector proteins, how do you apply statistical tests to validate the results?
For statistical learning methods like regression and discriminant analysis, it is easy to validate results by checking the underlying assumptions (white noise in the residuals, normality, etc.). For machine learning methods, which are assumption-free, you can use validation methods as @drsharma mentioned (for a great introductory book see: Introduction to Statistical Learning, Springer). If you want to compare and validate the performance of several prediction models, you can use pairwise comparison tests.
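One common pairwise comparison is a paired test on per-fold scores from the same cross-validation split; a minimal sketch (the fold accuracies below are invented for illustration):

```python
import math

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic for two models scored on the same folds.
    With enough folds, |t| well above ~2 hints at a real difference
    (look up the exact p-value in a t table with n-1 degrees of freedom)."""
    d = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical accuracies of two classifiers on the same 5 folds:
t = paired_t_statistic([0.81, 0.79, 0.84, 0.80, 0.82],
                       [0.78, 0.77, 0.80, 0.79, 0.80])
print(round(t, 2))  # model A is consistently better on every fold here
```

Pairing by fold matters: it removes fold-to-fold difficulty variation, which an unpaired test would count as noise.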
- Jul 15, 2011
Is anyone doing research on SAT implementations in cryptanalysis?
If so, please help me....
SAT solvers likely will not work well for trying to mine Bitcoin. That said, they can serve many other useful purposes in a blockchain context.
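To make the idea concrete: cryptanalytic SAT work encodes cipher equations as CNF clauses and hands them to a solver. Below is a toy brute-force checker; real work uses CDCL solvers such as MiniSat or CryptoMiniSat, and the "key" constraint here is invented:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force CNF satisfiability check, only viable for tiny instances.
    A clause is a list of non-zero ints; literal -k means variable k is False."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# Toy constraint x1 XOR x2 = 1, encoded as (x1 OR x2) AND (NOT x1 OR NOT x2):
print(satisfiable([[1, 2], [-1, -2]], 2))   # True, e.g. x1=True, x2=False
print(satisfiable([[1], [-1]], 1))          # False: x1 cannot be both values
```

In a real attack, the clauses encode the cipher's round functions plus known plaintext/ciphertext pairs, and a satisfying assignment reveals key bits.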
- Oct 28, 2019
I completed a study in which participants experienced two developed interaction conditions and completed the same questionnaire after each condition. I am trying to analyse the data to look for correlations between question answers, and to determine whether the condition had any significant effect on the answers.
I performed a Wilcoxon signed-rank test, which showed no statistically significant difference in answers between the two conditions. However, I also performed a Spearman correlation test, which showed some statistically significant correlations. This apparent contradiction has me a bit confused.
I have been doing a lot of reading online to work out which tests I could use. If anyone could help shed some light on this, it would be most appreciated.
There seems to be a lot of debate between subject areas, so to clarify things: my research is in the field of computer science.
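For what it's worth, the two tests answer different questions, which is why they can "disagree": Wilcoxon looks for a systematic shift between conditions, while Spearman looks for monotonic association. A minimal, tie-free Spearman implementation on invented data shows that scores can track each other closely (high correlation) with no net shift at all:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

a = [1, 2, 3, 4, 5, 6]          # hypothetical answers in condition A
b = [2, 1, 4, 3, 6, 5]          # condition B: strongly associated with A...
print(spearman_rho(a, b))       # about 0.83, a strong correlation
# ...yet the paired differences (+1, -1, +1, -1, +1, -1) show no net shift,
# so a Wilcoxon signed-rank test would find nothing - no contradiction.
```

So a significant Spearman rho alongside a non-significant Wilcoxon result is perfectly consistent.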
- Oct 25, 2019
How can machine learning help in developing the production of new materials?
In my opinion, it is very useful in materials engineering.
Dear Hossein Homayoun ,
Machine learning provides a new means of screening novel materials with good performance, developing quantitative structure-activity relationships (QSARs) and other models, predicting the properties of materials, discovering new materials, and performing other materials-related studies.
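At its simplest, such a structure-property model is just a fitted function from descriptors to a property. A minimal one-descriptor least-squares sketch (the data points are invented, not real materials data):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, e.g. a property against a
    single composition or structure descriptor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical descriptor values vs. a measured property:
slope, intercept = fit_line([0.0, 1.0, 2.0, 3.0], [1.1, 2.9, 5.2, 6.8])
print(slope, intercept)  # the fitted model can now screen unseen candidates
```

Real QSAR work uses many descriptors and nonlinear learners, but this captures the screening idea: fit on measured materials, then rank candidates by predicted property.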
- Oct 21, 2019
It is my understanding that machine learning approaches perform best for predicting secondary structures in proteins (with prediction accuracy of up to 80%). However, protein structure prediction with ML relies on finding homologous regions established from previously determined structures, so it won't work for proteins for which no known homologues exist. My background is computer science and, not being from the field of biochemistry, I wonder whether non-ML methods like improved Chou-Fasman and GOR are still being worked on.
This paper covers it in detail.
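On the non-ML side: Chou-Fasman style prediction is essentially a sliding window over per-residue propensity statistics, with no homology search involved. A toy sketch of the idea (the propensity values and cutoff below are illustrative stand-ins, not the published Chou-Fasman table):

```python
# Illustrative helix propensities only - NOT the real Chou-Fasman parameters:
P_HELIX = {"A": 1.42, "E": 1.51, "L": 1.21, "K": 1.16, "G": 0.57, "P": 0.57}

def helix_windows(seq, win=6, cutoff=1.0):
    """Return start indices of windows whose mean helix propensity exceeds
    cutoff - the statistical (non-ML, homology-free) core of the method."""
    hits = []
    for i in range(len(seq) - win + 1):
        window = seq[i:i + win]
        mean = sum(P_HELIX.get(aa, 1.0) for aa in window) / win
        if mean > cutoff:
            hits.append(i)
    return hits

print(helix_windows("AEALKAGPGGPG"))  # helix-like start, glycine/proline-rich tail
```

Because it needs only these per-residue statistics, this family of methods works even when no homologue of the query protein is known, at the cost of lower accuracy.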
- Oct 14, 2019
My field of study: Computer Science
Topics of interest: cyber security, machine learning and virtualization. But I am fine with other areas too.
Thank you in advance
"A solution to detect a malware at packet level and directing it to a honeypot in order to further analyze and find the attack vector or the attack source" would be a great use of machine learning and Cyber Security.
- Sep 27, 2019
I would like to announce that we have started a Special Issue on Artificial Intelligence and Blockchain in the IJIMAI journal. Publication in IJIMAI is peer reviewed, open access and free of charge.
Additionally, it was recently announced that IJIMAI is indexed in the Science Citation Index Expanded (Clarivate Analytics) beginning with vol. 4(3), March 2017. The journal will be listed in the 2019 Journal Citation Reports with a 2019 Journal Impact Factor when they are released in June 2020.
If you are working on interesting Blockchain and AI synergies, I would like to invite you to contribute to this SI.
Please, find all the info in the SI dossier:
Research Proposal Special Issue Artificial Intelligence and Blockchain (Intern...
Should you contribute a paper, please submit it through email to either editor.
Dear Arjun R ,
It is advisable that the paper is related to some cross-fertilization of AI and Blockchain, otherwise it risks being rejected for being out of scope.
That said, if the work involves Blockchain to some extent even if it is not core to the proposal, I still suggest that you submit the paper for consideration.
- Sep 20, 2019
During the research process in computer sciences, we need a set of tools such as:
- IDE for algorithms coding
- LaTeX editor for writing papers
- Statistical toolkit for experiments
- Some CAD tools for design
From your point of view, which tools should I use for research in computer sciences? Could you please, provide examples of these tools?
An IDE is really an umbrella over multiple tools. Off hand, I can think of:
a. Choose an optimizing compiler. It's important to make sure you use the correct flags;
b. The editor should be helpful, but must not get in your way. I like vi and emacs.
c. Use of a good debugger is highly recommended.
d. Use of memory checking tools such as Valgrind is recommended.
e. A good profiler will inform you about bottlenecks in your code.
f. I found that a paper-based log helps a lot. You may wish to keep your log in TeX if that is easier.
For documenting, you can use MS Word, but of course TeX is free and very versatile.
Often, design comes from some diagrams or imaging data. You need a good image segmentation tool, which should also mesh the model for you.
- Sep 11, 2019
Is the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) applicable for research in the Computer science field? I noticed that often this method of literature research is used only on medical studies.
If not... What do you suggest as a method for a systematic and structured literature review?
I appreciate your contribution.
Maybe few of these articles can help :
In addition, I want to emphasize that I have already written 2 papers in engineering using a systematic literature review with the PRISMA framework, and I am waiting for a reply from the reviewers. There is currently an exponentially growing number of papers in software engineering, and I bet quite a few of them use the PRISMA framework. Try searching Google Scholar with a boolean search like: "software engineering" AND "PRISMA" AND .....
You can add statements to build more specific search strings that are close to your research area.
POINTER: Don't get too attached to PRISMA's strict methodology; you can shape the flow diagram of your systematic review as you wish. But pay close attention to the inclusion and exclusion criteria, because extracting papers and including them in your corpus is vital for systematic assessment and hence can be prone to bias.
Hope I was somewhat helpful.
- Aug 16, 2019
- The majority of university professors come from the specific field in which they were trained
- It is likely that being an expert in his discipline or professional specialty allows a professor to transmit that specific knowledge to his students; but nothing ensures that this actually happens
- At present, university professors are hired not only for holding a bachelor's or higher degree, but also for their competence in telecommunications and computer science, these being indispensable tools in all human knowledge and activity
- University professors who are experts in their discipline and competent in communication and computer science are more likely to have a successful teaching practice
- Training in pedagogy or the education sciences is already indispensable for university professors to become better teachers, providing command of the methodology and instruments needed to improve their students' meaningful learning
Well, what I believe is that it may depend on the situation and the level of the educational institution, as well as the grade of the students. And certainly a professor has to be experienced in his or her own discipline.
- Sep 14, 2015
- Suppose that I wrote MATLAB code to track objects.
- I hit RUN and it took x milliseconds to execute on an IBM machine with certain specs, say x gigahertz processing speed, y gigabytes of RAM, etc.
- But how can I know how much time it will take to run on any other machine?
time() returns the total CPU time in milliseconds that was spent by the current MuPAD process.
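More generally, you cannot reliably convert a wall-clock measurement on one machine into a runtime on another: cache sizes, memory bandwidth, compiler, and system load matter, not just the clock rate. What you can do is measure carefully per machine and reason about growth rate. A minimal timing helper, sketched in Python rather than MATLAB (where tic/toc plays the same role):

```python
import time

def time_call(fn, *args, repeats=5):
    """Best-of-n wall-clock time for fn(*args) on THIS machine. For claims
    that transfer across machines, report the algorithm's complexity or
    counts of its dominant operations instead of raw milliseconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

elapsed = time_call(sorted, list(range(100_000)))
print(f"{elapsed * 1000:.2f} ms on this machine only")
```

Taking the best of several repeats reduces interference from other processes; the number is still only valid for the machine it was measured on.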