What can you do to become the best version of yourself?


Essential Steps to Become the Best Version of Yourself

SEO Meta Description

Discover the transformative journey to becoming the best version of yourself with practical steps on self-improvement, emotional intelligence, and lifelong learning. Start your path to personal growth today.

Understanding Personal Growth

The journey to becoming the best version of yourself is both exhilarating and challenging. It involves diving deep into your inner world, understanding your desires, and overcoming the fears that hold you back. It’s about pushing boundaries, learning continuously, and embracing change.

Self-Assessment: The Foundation of Growth

Self-assessment is your first step. By identifying your strengths and acknowledging your weaknesses, you set the stage for meaningful improvement. It’s about being honest with yourself and setting clear, achievable goals that guide your path forward.

Mindset Mastery: Cultivating a Growth Mindset

Adopting a growth mindset is pivotal. It means seeing challenges as opportunities, failures as lessons, and constantly seeking ways to better yourself. This mindset fosters resilience, encouraging you to persevere through setbacks and keep striving for your goals.

Physical Well-being: A Pillar of Personal Development

Your physical health is a cornerstone of personal development. Regular exercise, balanced nutrition, and adequate rest not only improve your body’s function but also enhance your mental clarity and emotional stability. It’s about respecting your body and giving it the care it deserves.

Emotional Intelligence: Navigating Life with Awareness

Developing emotional intelligence is crucial. It enables you to understand and manage your emotions, build stronger relationships, and navigate the complexities of social interactions with empathy and insight. It’s about being in tune with yourself and those around you.

Skill Development: Lifelong Learning

The pursuit of knowledge and skills is never-ending. Whether it’s enhancing your current abilities or learning something entirely new, continuous education keeps you relevant and adaptable. It’s about embracing the journey of learning as a lifelong adventure.

Time Management: Maximizing Productivity

Effective time management transforms how you work and live. By prioritizing tasks, setting realistic deadlines, and eliminating distractions, you can achieve more with less stress. It’s about making the most of your time to focus on what truly matters.

Financial Literacy: Securing Your Future

Understanding finances is key to securing your future. From budgeting and saving to investing wisely, financial literacy empowers you to make informed decisions, ensuring a stable and prosperous life. It’s about taking control of your financial destiny.

Social Connections: The Role of Relationships

Building and maintaining healthy relationships are vital to personal growth. These connections provide support, offer new perspectives, and open doors to opportunities. It’s about nurturing bonds that enrich your life and the lives of others.

Mental Health: The Core of Personal Fulfillment

Taking care of your mental health is non-negotiable. Whether it’s managing stress, seeking professional help when needed, or practicing mindfulness, prioritizing your mental well-being is essential for a balanced and fulfilling life. It’s about valuing your inner peace as much as your external achievements.

Spirituality and Inner Peace

Finding spirituality or a sense of inner peace is a deeply personal journey. It can provide a grounding force, offer comfort during tough times, and help you understand your place in the world. It’s about connecting with something greater than yourself, whatever that may be for you.

Creativity and Innovation: Unleashing Potential

Creativity isn’t just for artists; it’s a crucial skill for problem-solving and innovation. Encouraging creative thinking in all areas of your life can lead to unexpected solutions and new possibilities. It’s about seeing the world through a lens of curiosity and openness.

Leadership and Influence: Impacting Others

Developing leadership skills isn’t just for those in managerial positions. It’s about influencing others positively, whether in your personal life, community, or workplace. Good leadership is ethical, empathetic, and transformational. It’s about being the change you wish to see.

Habits for Success: Building a Better You

Your daily habits form the foundation of your life. Cultivating positive habits and shedding negative ones can significantly impact your well-being and success. It’s about making small, consistent changes that add up to big transformations.

Adventure and Experience: Growing Beyond Comfort Zones

Stepping out of your comfort zone is where growth happens. Whether it’s traveling to a new country, trying a new activity, or simply changing your routine, new experiences challenge you and expand your perspective. It’s about embracing the unknown with open arms.

Contribution and Community Service

Giving back to your community is both rewarding and enriching. Volunteering your time or resources can make a significant difference in the lives of others and provide a sense of fulfillment and connection.

FAQs

How do I start my journey to personal growth? Begin with self-reflection to understand your current position and where you want to be. Set small, achievable goals to start moving in the right direction.

Can personal growth happen at any age? Absolutely. Personal growth is a lifelong process that knows no age limits. It’s never too late to start.

How do I measure my progress? Set clear, measurable goals and regularly review them. Celebrate your successes, and learn from your setbacks.

What do I do if I feel stuck? Seek new experiences, learn new skills, or consider working with a mentor or coach. Change your routine to spark creativity and find new inspiration.

How important is a support system? A strong support system is invaluable. Surround yourself with people who encourage and uplift you.

How can I maintain motivation? Keep your goals visible, celebrate progress, and remember why you started. Find a community or group with similar aims for mutual support.

Conclusion

Becoming the best version of yourself is a journey filled with challenges, learning, and growth. It requires dedication, patience, and resilience but rewards you with a fulfilling life rich in experiences, relationships, and achievements. Remember, personal growth is not a destination but a continuous process of becoming who you wish to be. Embrace each step with openness and optimism, and never stop striving for improvement.

On Hate Speech Detection in Social Media

  1. Introduction
    • The Rising Challenge of Online Hate Speech
    • The Importance of Automatic Hate Speech Recognition
  2. Understanding Hate Speech
    • Definition and Types of Hate Speech
    • Legal and Ethical Considerations
  3. Technological Advances in Hate Speech Recognition
    • Machine Learning Models and Algorithms
    • Natural Language Processing (NLP) Techniques
  4. Data Collection and Annotation
    • Sources of Data for Training Models
    • Challenges in Annotating Hate Speech
  5. Model Training and Deployment
    • Supervised vs. Unsupervised Learning
    • Real-time Monitoring and Response Systems
  6. Accuracy and Reliability
    • Measures of Success
    • Dealing with False Positives and Negatives
  7. Ethical Implications and Bias Mitigation
    • Avoiding Bias in AI Models
    • Ethical Considerations in Automated Monitoring
  8. Case Studies: Successful Implementations
    • Examples of Effective Hate Speech Recognition Systems
    • Lessons Learned and Best Practices
  9. Limitations and Challenges
    • Technical Limitations of Current Technologies
    • Legal and Privacy Concerns
  10. The Role of Human Oversight
    • Integrating Human Judgment with AI
    • The Importance of Contextual Understanding
  11. Future Directions in Hate Speech Recognition
    • Emerging Technologies and Approaches
    • The Role of AI in Shaping Online Discourse
  12. Global Perspectives on Hate Speech Regulation
    • Comparing Approaches Across Different Jurisdictions
    • International Cooperation and Standards
  13. Community Engagement and Education
    • Raising Awareness and Promoting Digital Literacy
    • The Role of Social Media Platforms and Users
  14. Tools and Resources for Researchers and Practitioners
    • Open-Source Libraries and Datasets
    • Forums and Communities for Sharing Knowledge
  15. Policy Recommendations and Best Practices
    • Guidelines for Governments and Organizations
    • Building Resilient Online Communities
  16. FAQs
    • How does automatic hate speech recognition work?
    • What are the main challenges in detecting hate speech automatically?
    • How can bias be minimized in hate speech detection algorithms?
    • What is the future of hate speech recognition technology?
    • How can individuals contribute to reducing hate speech online?
    • What are the ethical considerations of automatic hate speech detection?
  17. Conclusion
    • Summarizing the State of Automatic Hate Speech Recognition
    • The Path Forward for Safer Online Spaces

Harnessing Technology: The Future of Automatic Hate Speech Recognition

SEO Meta Description

Explore the cutting-edge advancements in automatic hate speech recognition, including AI and machine learning technologies, ethical considerations, and the future of combating online hate speech.

The Rising Challenge of Online Hate Speech

In today’s digital age, the spread of hate speech online has emerged as a significant challenge, necessitating the development of advanced automatic hate speech recognition technologies. These technologies aim to identify and mitigate the impact of harmful content, ensuring safer online environments for users worldwide.

Understanding Hate Speech

Hate speech encompasses a range of content that can incite violence, discrimination, or hostility against individuals or groups based on race, religion, gender, or other identifiers. The complexity of identifying hate speech lies in its nuanced and context-dependent nature, challenging technologists and policymakers alike.

Technological Advances in Hate Speech Recognition

Advances in machine learning and natural language processing (NLP) have propelled the development of sophisticated models capable of analyzing and recognizing hate speech. These technologies leverage vast datasets to understand the subtleties of language, including slang, idioms, and coded language used to disguise hate speech.
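
To make the idea concrete, here is a deliberately minimal sketch of the supervised approach described above, using scikit-learn with a TF-IDF bag-of-words representation and a logistic regression classifier. The two example texts and their labels are invented placeholders; a real system needs a large, carefully annotated corpus and far more robust features.

# minimal sketch of a supervised hate speech classifier (scikit-learn)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["example of an abusive post", "a perfectly friendly comment"]   # placeholder data
labels = [1, 0]                                                          # 1 = hate speech, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["another comment to classify"]))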

Data Collection and Annotation

Critical to the success of automatic recognition systems is the collection and accurate annotation of data. This involves sourcing diverse and representative datasets and meticulously labeling content to train models effectively, a process that requires a nuanced understanding of language and context.

Model Training and Deployment

Training models involves choosing the right algorithms and approaches, such as supervised learning, where models learn from labeled examples, or unsupervised learning, which identifies patterns without explicit labeling. Deploying these models in real-time environments poses additional challenges, including ensuring their adaptability and scalability.

Accuracy and Reliability

Evaluating the success of hate speech recognition systems involves measuring their accuracy and reliability, including their ability to minimize false positives and negatives. This balance is crucial to prevent the unjust censorship of content while ensuring harmful speech is effectively identified.
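
In practice, this balance is usually quantified with precision (how many flagged posts were actually hateful), recall (how many hateful posts were caught), and their harmonic mean, the F1 score. A small illustrative sketch, with invented labels rather than real moderation data:

# sketch: measuring the false-positive / false-negative trade-off
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth (1 = hate speech)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # classifier output
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))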

Ethical Implications and Bias Mitigation

The development and implementation of automatic recognition systems raise ethical questions, particularly regarding bias. Ensuring these technologies do not perpetuate or exacerbate discrimination requires ongoing efforts to identify and mitigate biases within AI models.

Case Studies: Successful Implementations

Several platforms and organizations have successfully implemented hate speech recognition systems, offering valuable insights into best practices and strategies for effective detection and moderation. These case studies highlight the potential of technology to combat online hate speech, albeit within the constraints of current capabilities and ethical considerations.

Limitations and Challenges

Despite technological advancements, automatic hate speech recognition faces limitations, including the inherent complexity of language, the dynamic nature of online discourse, and broader legal and privacy concerns. These challenges underscore the need for continuous research and development in the field.

The Role of Human Oversight

Integrating human judgment with AI is essential for addressing the limitations of automatic systems. Human moderators play a crucial role in interpreting context, nuances, and cultural references, ensuring a balanced and nuanced approach to hate speech detection.

Future Directions in Hate Speech Recognition

The future of hate speech recognition lies in the development of more sophisticated AI models, the exploration of new methodologies, and the fostering of international collaboration. Emerging technologies promise to enhance the accuracy and efficiency of detection systems, shaping the future of online discourse.

Global Perspectives on Hate Speech Regulation

The regulation of hate speech varies significantly across jurisdictions, reflecting diverse legal, cultural, and ethical standards. Understanding these global perspectives is crucial for developing technologies and policies that respect freedom of expression while protecting individuals from harm.

Community Engagement and Education

Combating hate speech online requires not only technological solutions but also community engagement and education. Promoting digital literacy and fostering a culture of respect and empathy among online users are essential components of a comprehensive strategy to reduce hate speech.

Tools and Resources for Researchers and Practitioners

The field of hate speech recognition offers a wealth of tools, resources, and communities for researchers and practitioners. Open-source libraries, datasets, and forums facilitate the sharing of knowledge and best practices, driving innovation and collaboration in the fight against online hate speech.

Policy Recommendations and Best Practices

Developing effective policies and best practices for automatic hate speech recognition involves balancing the need for safety and the protection of free speech. Recommendations for governments, organizations, and platforms focus on ethical considerations, transparency, and the importance of fostering inclusive online environments.

FAQs

How does automatic hate speech recognition work?

Automatic hate speech recognition utilizes AI, including machine learning and natural language processing, to analyze and identify potentially harmful content based on patterns, keywords, and context.

What are the main challenges in detecting hate speech automatically?

Challenges include the nuanced nature of language, the dynamic evolution of online discourse, the difficulty of interpreting context, sarcasm, and coded language, and the need to keep false positives and false negatives low.

How can bias be minimized in hate speech detection algorithms?

Minimizing bias involves diverse and representative data collection, continuous monitoring and updating of models, and integrating human oversight to address the limitations of AI.

What is the future of hate speech recognition technology?

The future involves more sophisticated AI models, innovative approaches to detection, and greater international cooperation to create safer online spaces.

How can individuals contribute to reducing hate speech online?

Individuals can contribute by promoting positive discourse, reporting hate speech, and supporting efforts to educate and raise awareness about the impact of harmful online behavior.

What are the ethical considerations of automatic hate speech detection?

Ethical considerations include ensuring fairness, preventing bias, and balancing the detection of hate speech with the protection of free speech and privacy rights.

Conclusion

The state of automatic hate speech recognition is a testament to the potential of technology to make online spaces safer and more inclusive. Despite the challenges and limitations, ongoing advancements in AI and machine learning offer hope for more effective detection and prevention of hate speech. As we look to the future, the collaboration between technologists, policymakers, and communities will be key to harnessing these technologies for the greater good, ensuring that the digital world remains a place for free, respectful, and constructive discourse.

Understanding Hate Speech Detection

The detection of hate speech in social media is a complex phenomenon, engaging scholars from various domains including Natural Language Processing (NLP), machine learning, and social sciences. The goal is to develop systems capable of identifying and categorizing content that promotes hate or violence against groups or individuals based on attributes such as race, religion, gender, or nationality.

Key Findings and Approaches

  1. Resource Development and Benchmarking: A systematic review by Poletto et al. (2020) emphasizes the importance of annotated corpora and benchmarks in hate speech detection, noting the diversity in language coverage and topical focus of available resources. The study calls for enhanced development methodologies to address existing gaps and improve detection systems (Poletto et al., 2020).
  2. Text Mining Techniques: Research by Rini et al. (2020) on utilizing text mining for hate speech detection highlights the wide variety of methods and features employed. The findings suggest that no single approach guarantees superior detection performance, underscoring the influence of data sources, feature selection, and class definitions on outcomes (Rini et al., 2020).
  3. Abusive Content Detection: Alrashidi et al. (2022) review abusive content detection, proposing a new taxonomy to cover different aspects of the automatic detection process. This comprehensive approach provides insights into challenges and opportunities for future research in abusive content detection in social media (Alrashidi et al., 2022).
  4. Twitter as a Research Focus: Mansur et al. (2023) conducted a systematic review specifically on Twitter hate speech detection, identifying a lack of a perfect solution and presenting research opportunities to enhance detection systems. This study underscores the ongoing need for innovative approaches to address hate speech on specific platforms (Mansur et al., 2023).

Challenges and Future Directions

  • Data Quality and Availability: The quality and representativeness of datasets used for training and testing detection systems are critical. There is a need for more diverse, balanced, and annotated datasets that accurately reflect the nuances of hate speech across different languages and cultures.
  • Methodological Diversity: While machine learning and NLP techniques have shown promise, there’s an ongoing exploration of innovative methodologies, including deep learning and transfer learning, to improve detection accuracy and reduce false positives.
  • Ethical Considerations: The detection and moderation of hate speech raise ethical concerns, including the potential for censorship and the impact on freedom of expression. Developing transparent, accountable, and fair systems is essential.
  • Interdisciplinary Collaboration: Addressing hate speech effectively requires collaboration across disciplines, including computer science, linguistics, psychology, and law. Such collaboration can enhance understanding of the social and psychological underpinnings of hate speech, leading to more effective detection and intervention strategies.

Conclusion

The detection of hate speech in social media remains a challenging yet crucial task. While significant progress has been made, continuous effort in research, methodology development, and ethical considerations is necessary. As we move forward, the goal remains clear: to create a safer, more inclusive online environment for all users.

Knowledge-Based Controllers in Industry

The advent of Industry 4.0 has brought about a seismic shift in how industries operate, with a particular emphasis on automation, data exchange, and manufacturing technologies. Central to this revolution is the concept of knowledge-based controllers, which leverage the power of artificial intelligence (AI) and machine learning (ML) to enhance decision-making processes and operational efficiency. This blog delves into the essence of knowledge-based controllers within the industrial context, highlighting key findings from systematic reviews and research studies.

The Role of Knowledge-Based Controllers in Industry 4.0

Knowledge-based controllers are systems that utilize knowledge, data, and inference mechanisms to make decisions or control processes. In the context of Industry 4.0, these controllers are pivotal for implementing smart manufacturing and automation processes. They rely on a vast array of data from sensors, machines, and operations to optimize production, reduce downtime, and enhance product quality.
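
As a toy illustration of this "knowledge plus data plus inference" pattern, the sketch below applies a tiny rule base to a set of sensor readings. All sensor names, thresholds, and actions are invented for illustration; production systems would draw their rules from formal knowledge models and far richer data.

# toy sketch of a knowledge-based controller: a small rule base applied to sensor data
rules = [
    {"if": lambda s: s["temperature"] > 80,   "then": "reduce_feed_rate"},
    {"if": lambda s: s["vibration"] > 4.5,    "then": "schedule_maintenance"},
    {"if": lambda s: s["reject_rate"] > 0.02, "then": "recalibrate_tooling"},
]

def infer(sensor_data):
    # return every action whose condition holds for the current readings
    return [rule["then"] for rule in rules if rule["if"](sensor_data)]

print(infer({"temperature": 85, "vibration": 3.1, "reject_rate": 0.05}))
# -> ['reduce_feed_rate', 'recalibrate_tooling']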

Insights from Recent Research

  1. Integration with Industry 4.0 Technologies: The integration of knowledge management (KM) processes with Industry 4.0 technologies is crucial for leveraging organizational knowledge effectively. A study by Manesh et al. (2021) highlights the trends and intellectual structures of KM in Industry 4.0, underscoring the importance of creating, sharing, and applying knowledge in an interconnected and data-driven environment (Manesh et al., 2021).
  2. Software Architecture and Knowledge-Based Approaches: Li, Liang, and Avgeriou (2013) explore the application of knowledge-based approaches in software architecture, revealing how knowledge management technologies facilitate architectural evaluation and decision-making processes. This underscores the adaptability of knowledge-based systems across different applications, including software development in industrial settings (Li, Liang, & Avgeriou, 2013).
  3. Managerial Challenges and Industry 4.0: Schneider (2018) discusses the managerial challenges posed by Industry 4.0 and proposes a research agenda focused on strategy, planning, cooperation, business models, human resources, and leadership. Knowledge-based controllers are implicit in addressing these challenges by providing data-driven insights for strategic decision-making (Schneider, 2018).
  4. Knowledge Sharing in Global Software Development: Anwar, Rehman, Wang, and Hashmani (2019) emphasize the importance of knowledge sharing in global software development organizations, highlighting barriers and facilitators. Knowledge-based controllers can play a significant role in overcoming these barriers, promoting a culture of knowledge sharing and collaboration (Anwar et al., 2019).

Challenges and Future Directions

  • Data Quality and Integration: Ensuring high-quality, actionable data is integrated seamlessly into knowledge-based systems remains a challenge. Future research should focus on data normalization, validation, and real-time processing techniques.
  • Customization and Scalability: Customizing knowledge-based controllers to fit specific industrial needs while maintaining scalability is crucial. Research should explore modular, adaptable frameworks that can evolve with changing industry requirements.
  • Ethical and Security Considerations: As knowledge-based systems become more autonomous, ethical considerations and security measures must be prioritized. Future developments should incorporate robust security protocols and ethical guidelines to govern AI decision-making processes.
  • Interdisciplinary Collaboration: The development of knowledge-based controllers requires collaboration across disciplines, including AI, engineering, data science, and domain-specific knowledge. Interdisciplinary research teams can drive innovation and ensure that systems are both technically sound and practically relevant.

Conclusion

Knowledge-based controllers represent a cornerstone of the Industry 4.0 revolution, offering unparalleled opportunities for enhancing industrial operations through intelligent decision-making and process control. As the field continues to evolve, focused research and collaboration across disciplines will be vital in overcoming existing challenges and unlocking the full potential of these systems.

Using "Pipes" in Python

Pipes are a very handy tool when you want several independent processes (created with the multiprocessing library) to communicate with each other in Python.

The example consists of a control process that is supposed to regulate or switch something depending on sensor data. The sensor data are acquired by another process (getSensorData). In our case, random numbers in the range 0..99 are generated to simulate sensor data. This process is started by the control process, which also hands it one end of the connection. A Pipe connects exactly two processes and returns two connection objects: here child_conn, used by the sensor process to send, and parent_conn, used by the control process to receive. If the data are to be distributed to several subprocesses, you have to use a "Queue", also from the multiprocessing library, as shown in the sketch below.
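
As a minimal sketch of the Queue variant just mentioned: one producer process feeds two consumer processes through a shared multiprocessing.Queue, and each item is delivered to exactly one consumer. The worker names and the number of items are arbitrary example values.

# minimal sketch: distributing data to several subprocesses with a Queue
from multiprocessing import Process, Queue

def producer(q):
    for value in range(5):
        q.put(value)
    q.put(None)          # one stop sentinel per consumer
    q.put(None)

def consumer(q, name):
    while True:
        item = q.get()
        if item is None:
            break
        print(name, "received", item)

if __name__ == '__main__':
    q = Queue()
    workers = [Process(target=consumer, args=(q, "worker-%d" % i)) for i in range(2)]
    for w in workers:
        w.start()
    producer(q)
    for w in workers:
        w.join()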

In the present example the data are read with a delay; this is meant to demonstrate how the pipe buffers the data.

Here is the runnable source code (Python 3.6):

from multiprocessing import Process, Pipe, current_process
from datetime import datetime
import time, random

dta1 = 1

def getSensorData(conn, dta1):
    # simulates a sensor: sends a timestamp and three random values every 1.5 s
    print("getSensorData called")
    ctr = 0
    while ctr < 10:
        timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        time.sleep(1.5)
        dta_1 = random.randint(0, 99)
        dta_2 = random.randint(0, 99)
        dta_3 = random.randint(0, 99)
        conn.send([timestamp, dta_1, dta_2, dta_3])   # push the data set into the pipe
        ctr += 1

def Control_Proc():
    p = current_process()
    print('Starting:', p.name, p.pid)

    # create the pipe and hand the sending end to the sensor process
    parent_conn, child_conn = Pipe()
    ptemp = Process(name="getSensorData", target=getSensorData,
                    args=(child_conn, dta1))
    ptemp.daemon = True
    ptemp.start()

    while True:
        # read everything currently buffered in the pipe, with an artificial delay
        while parent_conn.poll():
            timestamp, data_01, data_02, data_03 = parent_conn.recv()
            print(timestamp, " data01: ", data_01, "data_02: ", data_02,
                  "data_03: ", data_03)
            time.sleep(5)


if __name__ == '__main__':
    Control_Proc()

This is what the result looks like in the development environment.

You can also find this code on GitHub:

https://github.com/Rellin-Entwicklung/Piping-Demos/blob/master/Piping_demo.py

Optical Fill-Level Detection with a Low-Cost Camera

In some places it is difficult to measure the fill level of a medium with dedicated level sensors.

If you have an unobstructed view of the medium (for example in a glass container), a low-cost camera combined with, say, a Raspberry Pi single-board computer is a good option.

With the open-source library "OpenCV" (here in version 3), a suitable program is quickly written.

The following picture shows the test setup: a glass vessel is filled with a medium that stands out clearly from the background.

In the first step, the webcam takes a photo of the setup.

Then the region of the image that is actually of interest is selected.

In the next step, this region of interest is cropped and converted to a grayscale image.

A thresholding step follows, which leaves only two colors: white and black.

If you now count the pixels, you have a measure of the fill level, and the problem is solved.
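
A rough sketch of this pipeline with OpenCV 3.x is shown below. The region-of-interest coordinates and the threshold value are placeholders that have to be adapted to the actual camera setup and lighting.

# sketch of the fill-level pipeline: capture, crop, grayscale, threshold, count pixels
import cv2

cam = cv2.VideoCapture(0)
ret, frame = cam.read()                       # take a photo with the webcam
cam.release()

x, y, w, h = 200, 100, 80, 300                # region of interest (placeholder values)
roi = frame[y:y + h, x:x + w]                 # crop the interesting part of the image

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)                    # convert to grayscale
ret, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)  # keep only black and white

filled_pixels = cv2.countNonZero(binary)      # pixel count as a measure of the fill level
print("fill level measure:", filled_pixels, "of", binary.size, "pixels")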

Raspberry Pi and an Industrial Operator Panel (HMI) via Modbus

If you want to interact with a Raspberry Pi in a harsh environment, using an industrial touch panel is a good idea. I like to use the devices from Proface for this; they are available in many sizes and variants. With the software tool GP-Pro EX, attractive user interfaces can be created quickly and easily. In a short example I want to show how to use a ProFace touch panel to communicate with the Raspberry Pi, switch the Raspi's GPIO outputs, and display the state of its inputs.

Interfaces

The Proface panels offer an impressive number of drivers for almost every controller available on the market and for a large number of other devices such as frequency inverters or temperature controllers. For our application it makes sense to use Modbus as the protocol for exchanging information between the Raspi and the panel. Ethernet serves as the physical transmission medium; both the Pi and the ProFace panels provide suitable interfaces for this.

Source code for the Modbus communication with the ProFace panels

#!/usr/bin/env python
'''
Pymodbus Server With Updating Thread
--------------------------------------------------------------------------
This is an example of having a background thread updating the
context while the server is operating. This can also be done with
a python thread::
    from threading import Thread
    thread = Thread(target=updating_writer, args=(context,))
    thread.start()
'''
#---------------------------------------------------------------------------#
# import the modbus libraries we need
#---------------------------------------------------------------------------#
from pymodbus.server.async import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from pymodbus.transaction import ModbusRtuFramer, ModbusAsciiFramer
import RPi.GPIO as GPIO

#---------------------------------------------------------------------------#
# import the twisted libraries we need
#---------------------------------------------------------------------------#
from twisted.internet.task import LoopingCall

#---------------------------------------------------------------------------#
# configure the service logging
#---------------------------------------------------------------------------#
import logging
logging.basicConfig()
log = logging.getLogger()
#log.setLevel(logging.DEBUG)


# use the RPi.GPIO BOARD layout (physical pin numbers)
GPIO.setmode(GPIO.BOARD)

# configure the output pins (driven by the panel buttons)
GPIO.setup(37, GPIO.OUT)
GPIO.setup(35, GPIO.OUT)
GPIO.setup(33, GPIO.OUT)
GPIO.setup(31, GPIO.OUT)
GPIO.setup(29, GPIO.OUT)

# configure the input pins with pull-down resistors
GPIO.setup(40, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)
GPIO.setup(38, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)
GPIO.setup(36, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)
GPIO.setup(32, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)
GPIO.setup(22, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)


#---------------------------------------------------------------------------#
# define your callback process
#---------------------------------------------------------------------------#
def updating_writer(a):
    ''' A worker process that runs every so often and
    updates live values of the context. It should be noted
    that there is a race condition for the update.
    :param arguments: The input arguments to the call
    '''
    log.debug("updating the context")
    context  = a[0]
    register = 1          # coil table
    slave_id = 0x00

    # coil address -> GPIO output pin: these coils are written by the panel buttons
    output_pins = {0x00: 37, 0x01: 35, 0x02: 33, 0x03: 31, 0x04: 29}
    # coil address -> GPIO input pin: the input states are mirrored back to the panel
    input_pins  = {0x05: 40, 0x06: 38, 0x07: 36, 0x08: 32, 0x09: 22}

    # read the coil values set by the panel buttons and switch the outputs accordingly
    for address, pin in output_pins.items():
        values = context[slave_id].getValues(register, address)
        if values[0]:
            gpio_on(pin)
        else:
            gpio_off(pin)

    # read the GPIO inputs and write their state into the coil table for the panel
    for address, pin in input_pins.items():
        if GPIO.input(pin) == GPIO.HIGH:
            context[slave_id].setValues(register, address, [1])
        else:
            context[slave_id].setValues(register, address, [0])

def request_data(a):
    log.debug("reading the context")
    context  = a[0]
    register = 2
    slave_id = 0x00
    address  = 0x00

    values = context[slave_id].getValues(register, address, count=15)
    print(values)

def gpio_on(pin):
    # switch the output (LED) on
    GPIO.output(pin, GPIO.HIGH)

def gpio_off(pin):
    # switch the output (LED) off
    GPIO.output(pin, GPIO.LOW)
#---------------------------------------------------------------------------#
# initialize your data store
#---------------------------------------------------------------------------#
store = ModbusSlaveContext(
    di = ModbusSequentialDataBlock(0, [0]*100),
    co = ModbusSequentialDataBlock(0, [0]*100),
    hr = ModbusSequentialDataBlock(0, [2]*100),
    ir = ModbusSequentialDataBlock(0, [0]*100))
context = ModbusServerContext(slaves=store, single=True)

#---------------------------------------------------------------------------#
# initialize the server information
#---------------------------------------------------------------------------#
identity = ModbusDeviceIdentification()
identity.VendorName  = 'pymodbus'
identity.ProductCode = 'PM'
identity.VendorUrl   = 'http://github.com/bashwork/pymodbus/'
identity.ProductName = 'pymodbus Server'
identity.ModelName   = 'pymodbus Server'
identity.MajorMinorRevision = '1.0'

#---------------------------------------------------------------------------#
# run the server you want
#---------------------------------------------------------------------------#
interval = 0.2    # update interval of the GPIO <-> Modbus synchronisation in seconds
interval2 = 1
print(context[0])
loop = LoopingCall(f=updating_writer, a=(context,))
loop2 = LoopingCall(f=request_data, a=(context,))
loop.start(interval, now=False)    # initially delay by one interval
#loop2.start(interval2, now=False)
StartTcpServer(context, identity=identity, address=("192.168.10.38", 502))
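
Before setting up the panel, the server can be tested from any PC on the network with pymodbus's synchronous client. This is only a quick sketch; it assumes pymodbus 1.x/2.x and uses the IP address from the server example above.

# quick test of the Modbus server from a PC
from pymodbus.client.sync import ModbusTcpClient

client = ModbusTcpClient("192.168.10.38", port=502)
client.connect()

client.write_coil(0x00, True)          # switch the output mapped to coil 000001 on
result = client.read_coils(0x05, 1)    # read back the state of the first input
print(result.bits[0])

client.close()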

Setting up the user interface with GP-Pro EX

Select the protocol of the ProFace touch panel
Assign the Modbus address of the switching element
Download the project to the panel

Displaying a Live Webcam Image with Python and OpenCV

Once OpenCV is installed on the test system, you have countless possibilities for working with images from whatever source.

A common basic task is simply to display the image of a webcam on the screen.

For this I found an elegant solution on GitHub, https://gist.github.com/tedmiston/6060034, which I am happy to share here. (Many thanks to Taylor D. Edmiston.)

Many thanks to geralt, https://pixabay.com/de/tablet-technologie-vorf%C3%BChrung-1704813/, for the image accompanying this post.

The runnable Python 3.6 / OpenCV 3.x code snippet for displaying the image on the monitor:

The role of cv2.waitKey(x) is interesting in this context. While cv2.waitKey(0) waits for a key press after every displayed frame, any positive argument only waits that many milliseconds, which results in a smooth display of the stream.

import cv2

cam = cv2.VideoCapture(0)
while True:
    ret_val, img = cam.read()
    img = cv2.flip(img, 1)           # mirror the image horizontally
    cv2.imshow('my webcam', img)
    if cv2.waitKey(1) == 27:
        break  # esc to quit
cv2.destroyAllWindows()

The "show webcam" function for use in your own projects:

"""
Simply display the contents of the webcam with optional 
mirroring using OpenCV 
via the new Pythonic cv2 interface.  Press <esc> to quit.
"""

import cv2


def show_webcam(mirror=False):
    cam = cv2.VideoCapture(0)
    while True:
        ret_val, img = cam.read()
        if mirror:
            img = cv2.flip(img, 1)
        cv2.imshow('my webcam', img)
        if cv2.waitKey(1) == 27:
            break  # esc to quit
    cv2.destroyAllWindows()


def main():
    show_webcam(mirror=True)


if __name__ == '__main__':
    main()

Webcam Motion Detection with Python and OpenCV

When you observe a scene with a webcam, it is often useful to know whether anything in that scene is moving. After detecting motion you can, for example, save an image, send a notification, or start further image analyses. A few lines of Python code thus turn the webcam into a motion detector which, in my experience, does not need to shy away from comparison with a passive infrared detector (PIR, the conventional motion sensor).

In this post I want to show you a code snippet for a working solution that saves an image after motion has been detected. The sensitivity used to decide whether motion has occurred is freely adjustable.

The code snippet for motion detection with a webcam, runnable under Python 3.6 / OpenCV 3.4:

import cv2
import datetime

def diffImg(t0, t1, t2):
    # motion mask from three consecutive frames: keep pixels that changed in both steps
    d1 = cv2.absdiff(t2, t1)
    d2 = cv2.absdiff(t1, t0)
    return cv2.bitwise_and(d1, d2)

folder="BV"
message_01 = "not yet"
message_02= "not yet"
cam = cv2.VideoCapture(0)
cam.set(3,1280)
cam.set(4,720)

winName = "Bewegungserkennung"

font = "FONT_HERSHEY_SIMPLEX"
#cv2.namedWindow(winName, cv2.CV_WINDOW_AUTOSIZE)
# Read three images first:

return_value, image = cam.read()
t_minus = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)   # frames are BGR, not RGB

t = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)

t_plus = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)

Durchschnitt = 1
n=1


while True:

    t_minus = t

    t = t_plus

    t_plus = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)
    result_image = diffImg(t_minus, t, t_plus)
    cv2.putText(result_image, 'OpenCV', (100, 500), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), thickness=2)
    cv2.putText(result_image, message_01, (100, 550), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), thickness=2)
    cv2.putText(result_image, "Stop with <ESC> ",(100, 600), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), thickness=2)
    cv2.imshow(winName, result_image)

    print (cv2.countNonZero(diffImg(t_minus, t, t_plus)))

    if cv2.countNonZero(diffImg(t_minus, t, t_plus)) > 305000:
        return_value, image = cam.read()
        # cv2.imwrite("buero" + str(n) + ".png", image)
        cv2.imwrite("{0}/{1:%d%b%Y_%H_%M_%S.%f}.png".format(
            folder, datetime.datetime.utcnow()), image)
        message_01 = str(n) + "  pictures saved"
        n= n+1

    key = cv2.waitKey(10)

    if key == 27:
        cam.release()
        cv2.destroyWindow(winName)
        break

The result: the difference image is displayed, and as long as motion is taking place, photos are saved. The number of saved photos is also reported.

A successful "self-test": images are saved whenever there is motion.

This solution has proven quite robust in practical testing (motion detector in the office) and, thanks to the continuously sliding comparison of consecutive frames, it also copes well with changing lighting conditions.

The approach based on differential images goes back to Matthias Stein (http://www.steinm.com/blog/motion-detection-webcam-python-opencv-differential-images/); I have made some adaptations and extensions for my application.

Note: The featured image comes from https://pixabay.com/de/person-bewegung-beschleunigen-2146508/, many thanks to ATDSPHOTO.

Receiving Notifications from the Raspberry Pi with Python, Using the "pushingbox" Service

With the Raspberry Pi it has become easy and very inexpensive to monitor almost anything. When the event you are waiting for finally happens, you want to be informed as quickly as possible, usually by email or directly on your mobile phone. This is where the free service "pushingbox" (https://www.pushingbox.com/) comes in handy; it makes this task very simple and solves it effectively.

With "pushingbox" there is only a single API, which can be triggered in various ways, e.g. by:

  • Arduino
  • Raspberry Pi
  • Spark Core
  • IFTTT
  • Email
  • SmartThings
  • HTTP Request
  • Your own script…
  • Vera, Fibaro, …

A notification triggered by one of the above can then be received via:

  • Emails
  • Tweets
  • Notifications on SmartWatch like Pebble
  • Smartphone Push Notifications (iOS, Android, WindowsPhone)
  • Windows8 Notifications
  • MacOS Notifications
  • Karotz Text-to-Speech
  • Custom HTTP Request

To use "pushingbox", you sign in there with your Google account. Right afterwards you can try out the service, most easily with a plain HTTP call in the browser, for example as shown below.
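
For a quick trial, it is enough to call the API URL with the DeviceID of your pushingbox scenario; the DeviceID below is a placeholder:

http://api.pushingbox.com/pushingbox?devid=YOUR_DEVICE_ID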

Immediately after the call, the corresponding emails appear in your inbox.

This functionality can also be used from Python with just a few lines of source code. (This source code comes from guiguiabloc, http://blog.guiguiabloc.fr/index.php/2012/02/22/pushingbox-vos-notifications-in-the-cloud/, 22.02.2012.)

import urllib, urllib2
class pushingbox():
  url = ""
  def __init__(self, key):
    url = 'http://api.pushingbox.com/pushingbox'
    values = {'devid' : key}
    try:
      data = urllib.urlencode(values)
      req = urllib2.Request(url, data)
      sendrequest = urllib2.urlopen(req)
    except Exception, detail:
      print "Error ", detail

Calling the class:

from PushingBox import pushingbox
key = "v35883B72B89AFAC"
pushingbox(key)
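
The class above targets Python 2 (urllib2). Under Python 3 the same trigger can be sent with the standard library alone; a minimal sketch, keeping the API URL and the device key from the example:

# Python 3 variant using only the standard library
import urllib.parse
import urllib.request

def pushingbox(key):
    data = urllib.parse.urlencode({'devid': key}).encode('ascii')
    try:
        urllib.request.urlopen('http://api.pushingbox.com/pushingbox', data)
    except Exception as detail:
        print("Error", detail)

pushingbox("v35883B72B89AFAC")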

This time the featured image comes from Comfreak, many thanks! https://pixabay.com/de/meer-flaschenpost-schiffbr%C3%BCchig-1377712/

Building a Color-Detection Sensor with Python and OpenCV – Part 1

In practice there is often the task of automatically detecting the color of objects (e.g. products) and deriving some action from that information, be it generating a notification or switching a relay.

For this reason I will show today how, with very little effort, you can use a webcam, a computer (which can also be a Raspberry Pi), Python, and OpenCV to build a system that digitally determines the color of an arbitrary object (represented by the three values of the HSV color space). This is done by moving a crosshair onto the region of the image you are interested in. The following keys can be used in the program:

G – move left

Z – move up

J – move right

N – move down

Once you have found the color value (hue) of interest, it can be stored with the M key (memory).

From the code snippet presented here, it is then easy to develop a sensor that triggers an action whenever a previously defined color (or color range) appears; this will be covered in Part 2 of this article if there is interest. A rough preview of the idea appears near the end of this post.

Code snippet: detecting color values in a region of interest (ROI) with Python 3.x and OpenCV

Simply copy this code snippet into the editor of your choice and run it with Python. numpy and OpenCV must be installed.

# calculation of HSV color values

import numpy as np
import cv2

# initialize the webcam
cam = cv2.VideoCapture(0)

cam.set(3, 1280)    # frame width
cam.set(4, 720)     # frame height

# define the initial coordinates of the crosshair / read-out position
x, y, w, h = 250, 150, 100, 100

# show the stream from the webcam
while cam.isOpened():
    # read a frame from the webcam
    ret, frame = cam.read()

    # convert the frame to HSV
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # show the HSV value at the crosshair position and draw the crosshair
    cv2.putText(frame, "HSV: {0}".format(frame[y + 1, x + 1]), (x, 100),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), thickness=2)
    cv2.line(frame, (x - 100, y), (x + 100, y), (0, 0, 0), 2)
    cv2.line(frame, (x, y - 100), (x, y + 100), (0, 0, 0), 2)
    cv2.circle(frame, (x, y), 50, (0, 0, 0), 2)

    # show complete frame
    cv2.imshow("frame", frame)

    # wait for a key press
    key = cv2.waitKey(1)
    print(key)
    if key == 106:        # J key - move right
        x = x + 5
    elif key == 103:      # G key - move left
        x = x - 5
    elif key == 122:      # Z key - move up
        y = y - 5
    elif key == 110:      # N key - move down
        y = y + 5
    elif key == 109:      # M key (memory) - store the current HSV value
        HSV_mem = frame[y, x]     # note: numpy indexing is [row, column] = [y, x]
        H_mem = HSV_mem[0]
        # break
    # quit the program when ESC is pressed
    elif key == 27:
        cv2.destroyAllWindows()
        break

Results

After positioning the crosshair on the corresponding model car, the HSV values of that model are displayed.

Original view of the "test setup" used

Explanations

The image delivered by the camera is first converted to the HSV color space.

In OpenCV, the HSV color space is stored with the following value ranges:

  • Hue (H, color value): 0 to 180
  • Saturation (S): 0 to 255
  • Value (V, brightness): 0 to 255

This value range differs from many other programs and libraries (such as GIMP, where the hue runs from 0 to 360, i.e. OpenCV's hue is the angle in degrees divided by two), so be careful: it is the cause of some hard-to-find bugs.

The image is now available as a matrix that provides all the information we need. A crosshair, which can be moved across the captured image with the keys on the keyboard, tells the program the position at which the color value should be determined. The HSV values are then determined and displayed on the image. For further processing, the current color value can be stored in a variable with the "m" key.
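
As a rough preview of the sensor idea from the introduction: once a hue value has been stored with the "m" key, cv2.inRange can turn it into a trigger. The tolerance, the saturation/value limits, and the minimum pixel count below are placeholder values that would need tuning for a real application.

# sketch: trigger when enough pixels fall into a band around the stored hue (H_mem)
import numpy as np
import cv2

def color_present(hsv_frame, h_mem, tolerance=10, min_pixels=500):
    h = int(h_mem)                                 # stored hue value
    lower = np.array([max(h - tolerance, 0), 50, 50], dtype=np.uint8)
    upper = np.array([min(h + tolerance, 180), 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv_frame, lower, upper)    # white where the color matches
    return cv2.countNonZero(mask) > min_pixels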

This article of course only shows the general approach. Under industrial conditions, the first step would be to ensure constant lighting conditions in order to obtain reliable results.