Author name: Vikram Chiluka

Python hashlib.shake_256() Function

Python hashlib Module:

To generate a message digest (secure hash) from a source message, we can use the Python hashlib module.

A hashing function in hashlib takes a variable-length sequence of bytes and converts it to a fixed-length sequence. The function works in only one direction: hashing a message produces a fixed-length digest, but the digest cannot be reversed to recover the original message.

A hash algorithm is considered strong in cryptography if the original message cannot be recovered from the digest, and if changing even one byte of the original message produces a large change in the digest value.

Secure hash values are commonly used to store passwords, so that even the application's owner never sees a user's plaintext password. When the user enters the password again, its hash is computed and compared with the stored value.
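The password-checking idea above can be sketched as follows. `hash_password` is a hypothetical helper written for this example; any hashlib algorithm would do, and a real system should also add a salt and a key-derivation function such as hashlib.pbkdf2_hmac:

```python
import hashlib

def hash_password(password: str) -> str:
    # shake_256 is an extendable-output hash; hexdigest(32) returns
    # a 32-byte digest as 64 hex characters.
    return hashlib.shake_256(password.encode("utf-8")).hexdigest(32)

stored_hash = hash_password("s3cret-pass")   # saved when the user signs up

# At login, hash the entered password again and compare the digests.
attempt = hash_password("s3cret-pass")
print(stored_hash == attempt)   # True

wrong = hash_password("guess")
print(stored_hash == wrong)     # False
```

The plaintext password is never stored; only the digests are compared.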

Hashing Algorithms That Are Available:

  • The algorithms_available attribute is a set of the names of all algorithms available in the running interpreter, including those accessible via OpenSSL. The same algorithm can appear under more than one name.
  • The algorithms_guaranteed attribute is the subset of algorithms guaranteed to be supported on every platform.
import hashlib
# Printing list of all the algorithms
print(hashlib.algorithms_available)
# Viewing algorithms
print(hashlib.algorithms_guaranteed)

Output:

{'sha384', 'blake2s', 'sha3_384', 'sha224', 'md5', 'shake_256', 'blake2b', 'sha3_512', 'sha1', 'shake_128', 'sha512', 'sha3_256', 'sha256', 'sha3_224'}
{'sha384', 'blake2s', 'sha3_384', 'sha224', 'md5', 'shake_256', 'blake2b', 'sha3_512', 'sha1', 'shake_128', 'sha512', 'sha3_256', 'sha256', 'sha3_224'}

Functions:

You only need to know a few functions to use the Python hashlib module.

  • You can hash the entire message at once by using the hashlib.algorithm_name(b"message") constructor, where algorithm_name is any supported algorithm such as sha256 or shake_256.
  • Additionally, the update() function can be used to append a byte message to the data being hashed. The output will be the same in both cases. Finally, the secure hash can be obtained by using the digest() function.
  • It’s worth noting that b is written to the left of the message to be hashed. This b indicates that the string is a byte string, since hash functions operate on bytes.
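The equivalence of the two approaches above can be shown in a few lines (sha256 is used here as a representative algorithm):

```python
import hashlib

# Hash the entire message at once...
one_shot = hashlib.sha256(b"hello world").digest()

# ...or feed it in pieces with update(); the resulting digests are identical.
h = hashlib.sha256()
h.update(b"hello ")
h.update(b"world")
print(one_shot == h.digest())  # True
```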

hashlib.shake_256() Function:

The hashlib.shake_256 method hashes a byte string, producing a digest from which the original data cannot be recovered. Passwords and important files can be hashed to secure them using the hashlib.shake_256 method.
NOTE: Please keep in mind that, unlike fixed-length algorithms, SHAKE-256 lets us choose the length of the digest.

Syntax:

hashlib.shake_256()

Return Value:

The shake_256() function returns a hash object; calling digest(length) on it returns a digest of the requested number of bytes for the given string.
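Because SHAKE-256 is an extendable-output function, digest() takes the desired length in bytes, and a shorter digest is simply a prefix of a longer one:

```python
import hashlib

h = hashlib.shake_256(b'Python-programs')

# digest(n) returns n bytes; we can ask for any length.
print(len(h.digest(12)))   # 12
print(len(h.digest(64)))   # 64

# The shorter digest is a prefix of the longer one.
print(h.digest(64)[:12] == h.digest(12))   # True
```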

hashlib.shake_256() Function in Python

Method #1: Using shake_256 Function (Static Input)

Here, we hash a byte string (for example, a password) to secure it using the hashlib.shake_256() function.

Approach:

  • Import hashlib module using the import keyword
  • Create a reference/Instance variable(Object) for the hashlib module and call shake_256() function and store it in a variable
  • Give the string as static input(here b represents byte string) and store it in another variable.
  • Call the update() function using the above-created object by passing the above-given byte string as an argument to it; this feeds the data into the hash.
  • Get the secure hash using the digest() function, passing the desired length in bytes.
  • The Exit of the Program.

Below is the implementation:

# Import hashlib module using the import keyword
import hashlib

# Creating a reference/Instance variable(Object) for the hashlib module and 
# call shake_256() function and store it in a variable
obj = hashlib.shake_256()

# Give the string as static input(here b represents byte string) and store it in another variable.
gvn_str = b'Python-programs'

# Call the update() function using the above created object by passing the above given
# byte string as an argument; this feeds the data into the hash.
obj.update(gvn_str)
# Get the secure hash using the digest() function (12 bytes here).
print(obj.digest(12))

Output:

b'\x80I\x82I^\xc4\xf6\xe3\x07"d,'

Method #2: Using shake_256 Function (User Input)

Approach:

  • Import hashlib module using the import keyword
  • Create a reference/Instance variable(Object) for the hashlib module and call shake_256() function and store it in a variable
  • Give the string as user input using the input() function and store it in another variable.
  • Convert the given string into a byte string using the bytes() function by passing the given string, ‘utf-8’ as arguments to it.
  • Call the update() function using the above-created object by passing the above byte string as an argument to it; this feeds the data into the hash.
  • Get the secure hash using the digest() function, passing the desired length in bytes.
  • The Exit of the Program.

Below is the implementation:

# Import hashlib module using the import keyword
import hashlib

# Creating a reference/Instance variable(Object) for the hashlib module and 
# call shake_256() function and store it in a variable
obj = hashlib.shake_256()

# Give the string as user input using the input() function and store it in another variable.
gvn_str = input("Enter some random string = ")
# Convert the given string into byte string using the bytes() function by passing given string, 
# 'utf-8' as arguments to it 
gvn_str=bytes(gvn_str, 'utf-8')

# Call the update() function using the above created object by passing the above given
# byte string as an argument; this feeds the data into the hash.
obj.update(gvn_str)
# Get the secure hash using the digest() function (14 bytes here).
print(obj.digest(14))

Output:

Enter some random string = welcome to Python-programs
b'\x83\xaf^\rQ\xbc\n\x84+\xfcy\xfa8`'


Python nltk.tokenize.SpaceTokenizer() Function

NLTK in Python:

NLTK is a Python toolkit for working with natural language processing (NLP). It provides us with a large number of test datasets for various text processing libraries. NLTK can be used to perform a variety of tasks such as tokenizing, parse tree visualization, and so on.

Tokenization

Tokenization is the process of dividing a large amount of text into smaller pieces/parts known as tokens. These tokens are extremely valuable for detecting patterns and are regarded as the first stage in stemming and lemmatization. Tokenization also aids in the replacement of sensitive data elements with non-sensitive data elements.

Natural language processing is utilized in the development of applications such as text classification, intelligent chatbots, sentiment analysis, language translation, and so on. To attain the above target, it is essential to consider the pattern in the text.

Natural Language Toolkit features an important module called NLTK tokenize sentences, which is further divided into sub-modules.

  • word tokenize
  • sentence tokenize

nltk.tokenize.SpaceTokenizer() Function:

Using the SpaceTokenizer() function of the nltk.tokenize module, we can extract tokens from a string of words based on the spaces between them.

Syntax:

tokenize.SpaceTokenizer()

Parameters: This method doesn’t accept any parameters

Return Value:

The word tokens are returned by the SpaceTokenizer() function.

nltk.tokenize.SpaceTokenizer() Function in Python

Method #1: Using SpaceTokenizer() Function (Static Input)

Here, we extract tokens from a string by splitting on the spaces between words.

Approach:

  • Import SpaceTokenizer() function from tokenize of nltk module using the import keyword
  • Creating a reference/Instance variable(Object) for the SpaceTokenizer Class
  • Give the string as static input and store it in a variable.
  • Pass the above-given string as an argument to extract tokens from a given string of words based on the space between them.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import SpaceTokenizer() function from tokenize of nltk module using the import keyword
from nltk.tokenize import SpaceTokenizer
    
# Creating a reference/Instance variable(Object) for the SpaceTokenizer Class
tkn = SpaceTokenizer()
    
# Give the string as static input and store it in a variable.
gvn_str = "hello python-programs.. @@&* \nwelcome\t good morning"

# Pass the above given string as an argument to extract tokens from a string of words 
# based on the space between them.
rslt = tkn.tokenize(gvn_str)

# Print the above result 
print(rslt)

Output:

['hello', 'python-programs..', '@@&*', '\nwelcome\t', 'good', 'morning']
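Note that SpaceTokenizer splits only on the literal space character, which is why the newline and tab survive inside tokens in the output above. Conceptually, a plain str.split(' ') reproduces this behaviour (a sketch; the nltk class wraps this in a common tokenizer interface):

```python
gvn_str = "hello python-programs.. @@&* \nwelcome\t good morning"

# Splitting on the single space character keeps '\n' and '\t' inside tokens,
# matching SpaceTokenizer's output above.
print(gvn_str.split(' '))
# ['hello', 'python-programs..', '@@&*', '\nwelcome\t', 'good', 'morning']
```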

Method #2: Using SpaceTokenizer() Function (User Input)

Approach:

  • Import SpaceTokenizer() function from tokenize of nltk module using the import keyword
  • Creating a reference/Instance variable(Object) for the SpaceTokenizer Class
  • Give the string as user input using the input() function and store it in a variable.
  • Pass the above-given string as an argument to extract tokens from a given string of words based on the space between them.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import SpaceTokenizer() function from tokenize of nltk module using the import keyword
from nltk.tokenize import SpaceTokenizer
    
# Creating a reference/Instance variable(Object) for the SpaceTokenizer Class
tkn = SpaceTokenizer()
    
# Give the string as user input using the input() function and store it in a variable.
gvn_str = input("Enter some random string = ")

# Pass the above given string as an argument to the extract tokens from a string of words 
# based on the space between them.
rslt = tkn.tokenize(gvn_str)

# Print the above result 
print(rslt)

Output:

Enter some random string = good morning this is python-programs
['good', 'morning', 'this', 'is', 'python-programs']

 


Python nltk.tokenize.TabTokenizer() Function

NLTK in Python:

NLTK is a Python toolkit for working with natural language processing (NLP). It provides us with a large number of test datasets for various text processing libraries. NLTK can be used to perform a variety of tasks such as tokenizing, parse tree visualization, and so on.

Tokenization

Tokenization is the process of dividing a large amount of text into smaller pieces/parts known as tokens. These tokens are extremely valuable for detecting patterns and are regarded as the first stage in stemming and lemmatization. Tokenization also aids in the replacement of sensitive data elements with non-sensitive data elements.

Natural language processing is utilized in the development of applications such as text classification, intelligent chatbots, sentiment analysis, language translation, and so on. To attain the above target, it is essential to consider the pattern in the text.

Natural Language Toolkit features an important module called NLTK tokenize sentences, which is further divided into sub-modules.

  • word tokenize
  • sentence tokenize

nltk.tokenize.TabTokenizer() Function:

Using the TabTokenizer() function of the nltk.tokenize module, we can extract tokens from a string of words based on the tabs between them.

Syntax:

tokenize.TabTokenizer()

Parameters: This method doesn’t accept any parameters

Return Value:

The word tokens, split on tabs, are returned by the TabTokenizer() function.

Examples:

Example1:

Input:

Given string = "hello python-programs\t@@&* \nwelcome\tgood morning"

Output:

['hello python-programs', '@@&* \nwelcome', 'good morning']

Example2:

Input:

Given string = "welcome\t to python-programs\t hi\tall"

Output:

['welcome', ' to python-programs', ' hi', 'all']

nltk.tokenize.TabTokenizer() Function in Python

Example1

Approach:

  • Import TabTokenizer() function from tokenize of nltk module using the import keyword
  • Creating a reference/Instance variable(Object) for the TabTokenizer Class
  • Give the string as static input and store it in a variable.
  • Pass the above-given string as an argument to extract tokens from a string of words based on the tabs between them and store it in another variable.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import TabTokenizer() function from tokenize of nltk module using the import keyword
from nltk.tokenize import TabTokenizer
    
# Creating a reference/Instance variable(Object) for the TabTokenizer Class
tkn = TabTokenizer()
    
# Give the string as static input and store it in a variable.
gvn_str = "hello python-programs\t@@&* \nwelcome\tgood morning"

# Pass the above given string as an argument to extract tokens from a string of words 
# based on the tabs between them.
rslt = tkn.tokenize(gvn_str)

# Print the above result 
print(rslt)

Output:

['hello python-programs', '@@&* \nwelcome', 'good morning']

Example2

# Import TabTokenizer() function from tokenize of nltk module using the import keyword
from nltk.tokenize import TabTokenizer
    
# Creating a reference/Instance variable(Object) for the TabTokenizer Class
tkn = TabTokenizer()
    
# Give the string as static input and store it in a variable.
gvn_str = "welcome\t to python-programs\t hi\tall"

# Pass the above given string as an argument to extract tokens from a string of words 
# based on the tabs between them and store it in another variable.
rslt = tkn.tokenize(gvn_str)

# Print the above result 
print(rslt)

Output:

['welcome', ' to python-programs', ' hi', 'all']
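As the outputs show, TabTokenizer splits only on the tab character, so the spaces after each tab stay attached to the following token. Conceptually, str.split('\t') reproduces this behaviour (a sketch; the nltk class wraps this in a common tokenizer interface):

```python
gvn_str = "welcome\t to python-programs\t hi\tall"

# Splitting on the tab character reproduces TabTokenizer's output above:
# leading spaces after each tab remain part of the next token.
print(gvn_str.split('\t'))
# ['welcome', ' to python-programs', ' hi', 'all']
```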

 


Python nltk.WhitespaceTokenizer() Function

NLTK in Python:

NLTK is a Python toolkit for working with natural language processing (NLP). It provides us with a large number of test datasets for various text processing libraries. NLTK can be used to perform a variety of tasks such as tokenizing, parse tree visualization, and so on.

Tokenization

Tokenization is the process of dividing a large amount of text into smaller pieces/parts known as tokens. These tokens are extremely valuable for detecting patterns and are regarded as the first stage in stemming and lemmatization. Tokenization also aids in the replacement of sensitive data elements with non-sensitive data elements.

Natural language processing is utilized in the development of applications such as text classification, intelligent chatbots, sentiment analysis, language translation, and so on. To attain the above target, it is essential to consider the pattern in the text.

Natural Language Toolkit features an important module called NLTK tokenize sentences, which is further divided into sub-modules.

  • word tokenize
  • sentence tokenize

nltk.tokenize.WhitespaceTokenizer() Function:

Using the WhitespaceTokenizer() function of the nltk.tokenize module, we can extract tokens from a string of words/sentences, splitting on any whitespace: spaces, newlines, and tabs.

Syntax:

tokenize.WhitespaceTokenizer()

Parameters: This method doesn’t accept any parameters

Return Value:

The word tokens, with surrounding spaces, newlines, and tabs stripped, are returned by the WhitespaceTokenizer() function.

Examples:

Example1:

Input:

Given String = "hello python-programs.. @@&* \n welcome\t good morning"

Output:

['hello', 'python-programs..', '@@&*', 'welcome', 'good', 'morning']

Example2:

Input:

Given String =  "welcome to\t the\n world \nof python"

Output:

['welcome', 'to', 'the', 'world', 'of', 'python']

nltk.tokenize.WhitespaceTokenizer() Function in Python

Method #1: Using WhitespaceTokenizer() Function (Static Input)

Here, we extract tokens from a string by splitting on any whitespace (spaces, newlines, and tabs) between words.

Approach:

  • Import WhitespaceTokenizer() function from tokenize of nltk module using the import keyword
  • Creating a reference/Instance variable(Object) for the WhitespaceTokenizer Class
  • Give the string as static input and store it in a variable.
  • Pass the above-given string as an argument to extract tokens from a string of words/sentences without whitespaces, new lines, and tabs and store it in another variable
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import WhitespaceTokenizer() function from tokenize of nltk module using the import keyword
from nltk.tokenize import WhitespaceTokenizer
    
# Creating a reference/Instance variable(Object) for the WhitespaceTokenizer Class
tkn = WhitespaceTokenizer()
    
# Give the string as static input and store it in a variable.
gvn_str = "hello python-programs..  @@&* \n welcome\t good morning"

# Pass the above given string as an argument to extract tokens from a string of words/sentences 
# without whitespaces, new lines, and tabs and store it in another variable
rslt = tkn.tokenize(gvn_str)

# Print the above result 
print(rslt)

Output:

['hello', 'python-programs..', '@@&*', 'welcome', 'good', 'morning']
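Unlike SpaceTokenizer, WhitespaceTokenizer treats any run of spaces, newlines, and tabs as a single separator. Conceptually, str.split() with no argument behaves the same way (a sketch; the nltk class wraps this in a common tokenizer interface):

```python
gvn_str = "hello python-programs..  @@&* \n welcome\t good morning"

# split() with no argument splits on any run of whitespace and drops
# empty strings, matching WhitespaceTokenizer's output above.
print(gvn_str.split())
# ['hello', 'python-programs..', '@@&*', 'welcome', 'good', 'morning']
```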

Method #2: Using WhitespaceTokenizer() Function (User Input)

Approach:

  • Import WhitespaceTokenizer() function from tokenize of nltk module using the import keyword
  • Creating a reference/Instance variable(Object) for the WhitespaceTokenizer Class
  • Give the string as user input using the input() function and store it in a variable.
  • Pass the above-given string as an argument to extract tokens from a string of words/sentences without whitespaces, new lines, and tabs and store it in another variable
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import WhitespaceTokenizer() function from tokenize of nltk module using the import keyword
from nltk.tokenize import WhitespaceTokenizer
    
# Creating a reference/Instance variable(Object) for the WhitespaceTokenizer Class
tkn = WhitespaceTokenizer()
    
# Give the string as user input using the input() function and store it in a variable.
gvn_str = input("Enter some random string = ")

# Pass the above given string as an argument to extract tokens from a string of words/sentences 
# without whitespaces, new lines, and tabs and store it in another variable
rslt = tkn.tokenize(gvn_str)

# Print the above result 
print(rslt)

Output:

Enter some random string = good morning python programs
['good', 'morning', 'python', 'programs']

 


Python NLTK tokenize.regexp() Function

NLTK in Python:

NLTK is a Python toolkit for working with natural language processing (NLP). It provides us with a large number of test datasets for various text processing libraries. NLTK can be used to perform a variety of tasks such as tokenizing, parse tree visualization, and so on.

Tokenization

Tokenization is the process of dividing a large amount of text into smaller pieces/parts known as tokens. These tokens are extremely valuable for detecting patterns and are regarded as the first stage in stemming and lemmatization. Tokenization also aids in the replacement of sensitive data elements with non-sensitive data elements.

Natural language processing is utilized in the development of applications such as text classification, intelligent chatbots, sentiment analysis, language translation, and so on. To attain the above target, it is essential to consider the pattern in the text.

Natural Language Toolkit features an important module called NLTK tokenize sentences, which is further divided into sub-modules.

  • word tokenize
  • sentence tokenize

tokenize.regexp() Function:

We can extract tokens from strings using regular expressions with the RegexpTokenizer class of the nltk.tokenize module.

Syntax:

tokenize.regexp()

Parameters: This method doesn’t accept any parameters

Return Value:

An array of tokens extracted with the given regular expression is returned.

NLTK tokenize.regexp() Function in Python

Method #1: Using regexp() Function (Static Input)

Approach:

  • Import RegexpTokenizer() method from tokenize of nltk module using the import keyword
  • Create a reference/Instance variable(Object) for the RegexpTokenizer Class by passing gaps as True as an argument to it.
  • Give the string as static input and store it in a variable.
  • Pass the above-given string to the tokenize() function to extract tokens from the given string using regular expressions
  • Store it in another variable.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import RegexpTokenizer() method from tokenize of nltk module using the import keyword
from nltk.tokenize import RegexpTokenizer
    
# Creating a reference/Instance variable(Object) for the RegexpTokenizer Class by passing 
# gaps as True.
tkn = RegexpTokenizer(r'\s+', gaps = True)
    
# Give the string as static input and store it in a variable.
gvn_str = "Hello this is Python-programs"
    
# Pass the above given string to the tokenize() function to extract tokens from the 
# given string using regular expressions
# Store it in another variable.
rslt = tkn.tokenize(gvn_str)

# Print the above result
print(rslt)

Output:

['Hello', 'this', 'is', 'Python-programs']

Method #2: Using regexp() Function (User Input)

Approach:

  • Import RegexpTokenizer() method from tokenize of nltk module using the import keyword
  • Create a reference/Instance variable(Object) for the RegexpTokenizer Class by passing gaps as True as an argument to it.
  • Give the string as user input using the input() function and store it in a variable.
  • Pass the above-given string to the tokenize() function to extract tokens from the given string using regular expressions
  • Store it in another variable.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import RegexpTokenizer() method from tokenize of nltk module using the import keyword
from nltk.tokenize import RegexpTokenizer
    
# Creating a reference/Instance variable(Object) for the RegexpTokenizer Class by passing 
# gaps as True as an argument to it.
tkn = RegexpTokenizer(r'\s+', gaps = True)
    
# Give the string as user input using the input() function and store it in a variable.
gvn_str = input("Enter some random string = ")
    
# Pass the above given string to the tokenize() function to extract tokens from the 
# given string using regular expressions
# Store it in another variable.
rslt = tkn.tokenize(gvn_str)

# Print the above result
print(rslt)

Output:

Enter some random string = welcome to Python programs
['welcome', 'to', 'Python', 'programs']

 


Python NLTK nltk.TweetTokenizer() Function

NLTK in Python:

NLTK is a Python toolkit for working with natural language processing (NLP). It provides us with a large number of test datasets for various text processing libraries. NLTK can be used to perform a variety of tasks such as tokenizing, parse tree visualization, and so on.

Tokenization

Tokenization is the process of dividing a large amount of text into smaller pieces/parts known as tokens. These tokens are extremely valuable for detecting patterns and are regarded as the first stage in stemming and lemmatization. Tokenization also aids in the replacement of sensitive data elements with non-sensitive data elements.

Natural language processing is utilized in the development of applications such as text classification, intelligent chatbots, sentiment analysis, language translation, and so on. To attain the above target, it is essential to consider the pattern in the text.

Natural Language Toolkit features an important module called NLTK tokenize sentences, which is further divided into sub-modules.

  • word tokenize
  • sentence tokenize

nltk.TweetTokenizer() Function:

We can split a stream of text, such as a tweet, into small tokens for analysis with the help of the nltk.TweetTokenizer() method. It is tailored to casual text and keeps items such as hashtags, mentions, and emoticons together as single tokens.

Syntax:

nltk.TweetTokenizer()

Return Value:

The stream of tokens is returned by the TweetTokenizer() function.

NLTK nltk.TweetTokenizer() Function in Python

Method #1: Using TweetTokenizer() Function (Static Input)

Here, when we pass text in the form of a string, it is converted to small tokens from a large string using the TweetTokenizer() method.

Approach:

  • Import TweetTokenizer() method from tokenize of nltk module using the import keyword
  • Create a reference/Instance variable(Object) for the TweetTokenizer Class and store it in a variable
  • Give the string as static input and store it in a variable.
  • Pass the above-given string to the tokenize() function to convert it to small tokens from a given large string using the TweetTokenizer() method.
  • Store it in another variable.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import TweetTokenizer() method from tokenize of nltk module using the import keyword
from nltk.tokenize import TweetTokenizer

# Creating a reference/Instance variable(Object) for the TweetTokenizer Class and 
# store it in a variable
tkn = TweetTokenizer()

# Give the string as static input and store it in a variable.
gvn_str = "Hello this is Python-programs"

# Pass the above given string to the tokenize() function to convert it to small tokens from a 
# given large string using the TweetTokenizer() method.
# Store it in another variable.
rslt = tkn.tokenize(gvn_str)

# Print the above result
print(rslt)

Output:

['Hello', 'this', 'is', 'Python-programs']

Method #2: Using TweetTokenizer() Function (User Input)

Approach:

  • Import TweetTokenizer() method from tokenize of nltk module using the import keyword
  • Create a reference/Instance variable(Object) for the TweetTokenizer Class and store it in a variable
  • Give the string as user input using the input() function and store it in a variable.
  • Pass the above-given string to the tokenize() function to convert it to small tokens from a given large string using the TweetTokenizer() method.
  • Store it in another variable.
  • Print the above result.
  • The Exit of the Program.

Below is the implementation:

# Import TweetTokenizer() method from tokenize of nltk module using the import keyword
from nltk.tokenize import TweetTokenizer

# Creating a reference/Instance variable(Object) for the TweetTokenizer Class and 
# store it in a variable
tkn = TweetTokenizer()

# Give the string as user input using the input() function and store it in a variable.
gvn_str = input("Enter some random string = ")

# Pass the above given string to the tokenize() function to convert it to small tokens from a 
# large given string using the TweetTokenizer() method.
# Store it in another variable.
rslt = tkn.tokenize(gvn_str)

# Print the above result
print(rslt)

Output:

Enter some random string = : %:- <> () a {} [] :-
[':', '%', ':', '-', '<', '>', '(', ')', 'a', '{', '}', '[', ']', ':', '-']
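The behaviour that sets TweetTokenizer apart from the whitespace-based tokenizers is worth showing on tweet-like input: hashtags, @-mentions, and emoticons are kept together as single tokens (a small sketch; requires the nltk package):

```python
from nltk.tokenize import TweetTokenizer

tkn = TweetTokenizer()

# Hashtags, mentions, and emoticons survive as single tokens.
print(tkn.tokenize("Loving #Python @user :)"))
# ['Loving', '#Python', '@user', ':)']
```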

 


Python Scipy stats.halfgennorm.logpdf() Function

Scipy Library in Python:

  • SciPy is a scientific computation package that uses the NumPy library underneath.
  • SciPy is an abbreviation for Scientific Python.
  • It includes additional utility functions for optimization, statistics, and signal processing.
  • SciPy, like NumPy, is open source, so we can freely use it.
  • Travis Olliphant, the developer of NumPy, created SciPy.
  • SciPy has optimized and added/enhanced functions that are often used in NumPy and Data Science.

stats.halfgennorm.logpdf() Function:

We can obtain the log value of the probability density function by using the stats.halfgennorm.logpdf() method.

The probability density function for halfgennorm is:

f(x, β) = β · exp(−x^β) / Γ(1/β),   for x ≥ 0, β > 0

Syntax:

stats.halfgennorm.logpdf(x, beta)

Return Value:

The log value of the probability density function is returned by the stats.halfgennorm.logpdf() Function.

Scipy stats.halfgennorm.logpdf() Function in Python

Method #1: Using logpdf() Function (Static Input)

Approach:

  • Import halfgennorm() method from stats of scipy module using the import keyword
  • Give the beta value as static input and store it in a variable.
  • Calculate the log value of probability density function using the logpdf() function of halfgennorm by passing some random value(x), given beta value as arguments to it.
  • Store it in another variable.
  • Print the log value of the probability density function for the given beta value.
  • The Exit of the Program.

Below is the implementation:

# Import halfgennorm() method from stats of scipy module using the import keyword
from scipy.stats import halfgennorm

# Give the beta value as static input and store it in a variable.
gvn_beta = 3

# Calculate the log value of probability density function using the logpdf() function
# of halfgennorm by passing some random value(x), given beta value as arguments to it.
# Store it in another variable.
rslt = halfgennorm.logpdf(0.2, gvn_beta)

# Print the log value of probability density function for the given beta value
print("The log value of probability density function for the given beta {", gvn_beta,"} value = ", rslt)

Output:

The log value of probability density function for the given beta { 3 } value = 0.10519164174034268
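The result above can be cross-checked directly, since logpdf(x, beta) is simply the natural log of pdf(x, beta):

```python
import numpy as np
from scipy.stats import halfgennorm

x, gvn_beta = 0.2, 3

# logpdf(x, beta) equals log(pdf(x, beta)).
print(np.isclose(halfgennorm.logpdf(x, gvn_beta),
                 np.log(halfgennorm.pdf(x, gvn_beta))))   # True
```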

Method #2: Using logpdf() Function (User Input)

Approach:

  • Import halfgennorm() method from stats of scipy module using the import keyword
  • Give the beta value as user input using the int(input()) function and store it in a variable.
  • Calculate the log value of probability density function using the logpdf() function of halfgennorm by passing some random value(x), given beta value as arguments to it.
  • Store it in another variable.
  • Print the log value of the probability density function for the given beta value.
  • The Exit of the Program.

Below is the implementation:

# Import halfgennorm() method from stats of scipy module using the import keyword
from scipy.stats import halfgennorm

# Give the beta value as user input using the int(input()) function and store it in a variable.
gvn_beta = int(input("Enter some random number = "))

# Calculate the log value of probability density function using the logpdf() function
# of halfgennorm by passing some random value(x), given beta value as arguments to it.
# Store it in another variable.
rslt = halfgennorm.logpdf(0.3, gvn_beta)

# Print the log value of probability density function for the given beta value
print("The log value of probability density function for the given beta {", gvn_beta,"} value = ", rslt)

Output:

Enter some random number = 6
The log value of probability density function for the given beta { 6 } value = 0.07429703414981458


Python Scipy stats.halfgennorm.pdf() Function

Scipy Library in Python:

  • SciPy is a scientific computation package that uses the NumPy library underneath.
  • SciPy is an abbreviation for Scientific Python.
  • It includes additional utility functions for optimization, statistics, and signal processing.
  • SciPy, like NumPy, is open source, so we can freely use it.
  • Travis Olliphant, the developer of NumPy, created SciPy.
  • SciPy has optimized and added/enhanced functions that are often used in NumPy and Data Science.

stats.halfgennorm.pdf() Function:

We can obtain the value of the probability density function by using the stats.halfgennorm.pdf() method.

The probability density function for halfgennorm is:

f(x, β) = β · exp(−x^β) / Γ(1/β),   for x ≥ 0, β > 0

Syntax:

stats.halfgennorm.pdf(x, beta)

Return Value:

The probability density value is returned by the stats.halfgennorm.pdf() Function.

Scipy stats.halfgennorm.pdf() Function in Python

Method #1: Using pdf() Function (Static Input)

Approach:

  • Import the halfgennorm object from scipy.stats using the import keyword.
  • Give the beta value as static input and store it in a variable.
  • Calculate the probability density value using the pdf() function of halfgennorm, passing a sample value (x) and the given beta value as arguments.
  • Store it in another variable.
  • Print the probability density value for the given beta value.
  • The Exit of the Program.

Below is the implementation:

# Import the halfgennorm object from scipy.stats using the import keyword
from scipy.stats import halfgennorm

# Give the beta value as static input and store it in a variable.
gvn_beta = 2

# Calculate the probability density value using the pdf() function of halfgennorm,
# passing a sample value (x) and the given beta value as arguments.
# Store it in another variable.
rslt = halfgennorm.pdf(0.2, gvn_beta)

# Print the probability density value for the given beta value
print("The probability density value for the given beta {", gvn_beta,"} value = ", rslt)

Output:

The probability density value for the given beta { 2 } value = 1.0841347871048632

Method #2: Using pdf() Function (User Input)

Approach:

  • Import the halfgennorm object from scipy.stats using the import keyword.
  • Give the beta value as user input using the int(input()) function and store it in a variable.
  • Calculate the probability density value using the pdf() function of halfgennorm, passing a sample value (x) and the given beta value as arguments.
  • Store it in another variable.
  • Print the probability density value for the given beta value.
  • The Exit of the Program.

Below is the implementation:

# Import the halfgennorm object from scipy.stats using the import keyword
from scipy.stats import halfgennorm

# Give the beta value as user input using the int(input()) function and store it in a variable.
gvn_beta = int(input("Enter some random number = "))

# Calculate the probability density value using the pdf() function of halfgennorm,
# passing a sample value (x) and the given beta value as arguments.
# Store it in another variable.
rslt = halfgennorm.pdf(0.5, gvn_beta)

# Print the probability density value for the given beta value
print("The probability density value for the given beta {", gvn_beta,"} value = ", rslt)

Output:

Enter some random number = 6
The probability density value for the given beta { 6 } value = 1.0612007331497644


Python Scipy stats.halfgennorm.rvs() Function

Scipy Library in Python:

  • SciPy is a scientific computation package that uses the NumPy library underneath.
  • SciPy is an abbreviation for Scientific Python.
  • It includes additional utility functions for optimization, statistics, and signal processing.
  • SciPy, like NumPy, is open source, so we can freely use it.
  • Travis Oliphant, the developer of NumPy, created SciPy.
  • SciPy has optimized and added/enhanced functions that are often used in NumPy and Data Science.

stats.halfgennorm.rvs() Function:

We can generate a random variate from the half generalized normal distribution using the stats.halfgennorm.rvs() function.

Syntax:

stats.halfgennorm.rvs(beta)

Return Value:

A random variate value is returned by the stats.halfgennorm.rvs() Function.
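rvs() also accepts the standard scipy.stats keywords size (number of draws) and random_state (reproducibility). A brief sketch; the seed 42 is an arbitrary choice:

```python
from scipy.stats import halfgennorm

gvn_beta = 2
# size draws several variates at once; random_state fixes the seed so the
# draw is reproducible across runs.
sample = halfgennorm.rvs(gvn_beta, size=5, random_state=42)
print(sample)
# All variates are non-negative, since halfgennorm's support is [0, inf).
print((sample >= 0).all())  # True
```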

Examples:

Example1:

Input:

Given Beta = 4

Output:

The random variate value for the given beta { 4 } value = 0.2221758090994274

Example2:

Input:

Given Beta = 2

Output:

The random variate value for the given beta { 2 } value = 0.9441734155526698

Scipy stats.halfgennorm.rvs() Function in Python

Method #1: Using rvs() Function (Static Input)

Approach:

  • Import the halfgennorm object from scipy.stats using the import keyword.
  • Give the beta value as static input and store it in a variable.
  • Pass the given beta value as an argument to the rvs() function of halfgennorm to generate a random variate value from the half generalized normal distribution.
  • Store it in another variable.
  • Print the random variate value for the given beta value.
  • The Exit of the Program.

Below is the implementation:

# Import the halfgennorm object from scipy.stats using the import keyword
from scipy.stats import halfgennorm

# Give the beta value as static input and store it in a variable.
gvn_beta = 2

# Pass the given beta value as an argument to the rvs() function of halfgennorm to
# generate a random variate value from the half generalized normal distribution.
# Store it in another variable.
rslt = halfgennorm.rvs(gvn_beta)

# Print the random variate value for the given beta value
print("The random variate value for the given beta {", gvn_beta,"} value = ", rslt)

Output:

The random variate value for the given beta { 2 } value = 0.24472017026361062

Method #2: Using rvs() Function (User Input)

Approach:

  • Import the halfgennorm object from scipy.stats using the import keyword.
  • Give the beta value as user input using the int(input()) function and store it in a variable.
  • Pass the given beta value as an argument to the rvs() function of halfgennorm to generate a random variate value from the half generalized normal distribution.
  • Store it in another variable.
  • Print the random variate value for the given beta value.
  • The Exit of the Program.

Below is the implementation:

# Import the halfgennorm object from scipy.stats using the import keyword
from scipy.stats import halfgennorm

# Give the beta value as user input using the int(input()) function and store it in a variable.
gvn_beta = int(input("Enter some random number = "))

# Pass the given beta value as an argument to the rvs() function of halfgennorm to
# generate a random variate value from the half generalized normal distribution.
# Store it in another variable.
rslt = halfgennorm.rvs(gvn_beta)

# Print the random variate value for the given beta value
print("The random variate value for the given beta {", gvn_beta,"} value = ", rslt)

Output:

Enter some random number = 4
The random variate value for the given beta { 4 } value = 0.8736129067330833



Program to Create a Linked List & Display the Elements in the List in Python

Linked List Data Structure:

A linked list is a type of data structure that consists of a chain of nodes, each of which contains a value and a pointer to the next node in the chain.

The list’s head pointer points to the first node, and the last node’s next pointer points to null. The head pointer itself points to null when the list is empty.

Linked lists can grow in size dynamically, and inserting or deleting an element is simple: unlike arrays, we only need to rewire the pointers of the neighboring nodes, with no shifting of elements.

Linked lists are commonly used in the construction of file systems, adjacency lists, and hash tables.
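The pointer-rewiring claim above can be shown with a minimal sketch (the Node class and names here are illustrative, mirroring the program below): inserting a value between two nodes touches exactly one existing pointer, no matter how long the list is.

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.nextPtr = None

# Build a two-node chain: 1 -> 3
head = Node(1)
head.nextPtr = Node(3)

# Insert 2 between them: only head's nextPtr changes; nothing is shifted.
middle = Node(2)
middle.nextPtr = head.nextPtr
head.nextPtr = middle

# Walk the chain to confirm the new order.
vals = []
node = head
while node is not None:
    vals.append(node.val)
    node = node.nextPtr
print(vals)  # [1, 2, 3]
```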

Python Program to Create a Linked List & Display the Elements in the List

Approach:

  • Create a Node class that creates a node of the LinkedList.
  • Inside it, create a constructor that accepts val as an argument and initializes the class variable val with the given argument val.
  • Initialize the next pointer to None (i.e. null).
  • Create a LinkedList class which builds the LinkedList by connecting all the nodes.
  • Inside the class, the constructor initializes the head pointer and the last node pointer to None (null).
  • Create a function addElements() inside the class which accepts a data value as an argument and adds this node to the LinkedList.
  • Check if the lastNode pointer is None using the if conditional statement.
  • If the above condition is true (there are no elements in the linked list), create a node using the Node class by passing the data as an argument.
  • Initialize the head with the above node and point lastNode at it.
  • Else create the new node, attach it to the current last node's next pointer, and advance lastNode to the new node.
  • Create a function displayElements() inside the class which prints all the elements of the LinkedList.
  • Take a pointer temPtr that traverses the nodes of the LinkedList and initialize it to the head pointer.
  • Loop while temPtr is not None using the while loop.
  • Print the value at the current node.
  • Advance temPtr to the next node.
  • Create an object of the LinkedList class and store it in a variable.
  • Give the number of elements as user input and store it in a variable.
  • Loop that many times using the for loop.
  • Give the data value as user input and store it in a variable.
  • Pass the above value as an argument to the addElements() function to add it as a node to the LinkedList.
  • Print all the node values of the LinkedList by calling displayElements() on the above object.
  • The Exit of the Program.

Below is the Implementation:

# Create a Node class that creates a node of the LinkedList.
class Node:
    # Create a constructor that accepts val as an argument and initializes the class variable val with the given argument val.
    def __init__(self, val):
        self.val = val
        # Initialize the next pointer to None (i.e. null).
        self.nextPtr = None

# Create a LinkedList class which builds the LinkedList by connecting all the nodes.
class LinkedList:
    # Inside the class, the constructor initializes the head pointer and the last node pointer to None (null).
    def __init__(self):
        self.headPtr = None
        self.lastNode = None

    # Create a function addElements() inside the class which accepts a data value as an argument and adds this node to the LinkedList.
    def addElements(self, val):
        # Check if the lastNode pointer is None using the if conditional statement.
        if self.lastNode is None:
            # If the above condition is true (there are no elements in the linked list),
            # create a node using the Node class by passing the data as an argument,
            # initialize the head with it, and point lastNode at it.
            self.headPtr = Node(val)
            self.lastNode = self.headPtr
        else:
            # Else create the new node and attach it to the current last node's next pointer.
            self.lastNode.nextPtr = Node(val)
            # Advance the last node pointer to the newly added node.
            self.lastNode = self.lastNode.nextPtr

    # Create a function displayElements() inside the class which prints all the elements of the LinkedList.
    def displayElements(self):
        # Take a pointer that traverses the nodes of the LinkedList and initialize it to the head pointer.
        temPtr = self.headPtr
        # Loop while temPtr is not None using the while loop.
        while temPtr is not None:
            # Print the value at the current node.
            print(temPtr.val, end=' ')
            # Advance temPtr to the next node.
            temPtr = temPtr.nextPtr

# Create an object of the LinkedList class and store it in a variable.
lkdList = LinkedList()
# Give the number of elements as user input and store it in a variable.
n = int(input('Enter the number of elements you wish to add in the linked list = '))
# Loop till the above number of elements using the for loop.
for i in range(n):
    # Give the data value as user input and store it in a variable.
    val = int(input('Enter data item: '))
    # Pass the above value as an argument to the addElements() function to add it as a node to the LinkedList.
    lkdList.addElements(val)
# Print all the node values of the LinkedList by calling displayElements() on the above object.
print('The Elements of the Linked List are: ')
lkdList.displayElements()

Output:

Enter the number of elements you wish to add in the linked list = 4
Enter data item: 3
Enter data item: 8
Enter data item: 2
Enter data item: 4
The Elements of the Linked List are: 
3 8 2 4
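As a variation on the program above (a sketch, not part of the original), the display traversal can instead be written as a generator via __iter__, so the list works directly with for loops and list(); the values are hard-coded here in place of user input:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.nextPtr = None

class LinkedList:
    def __init__(self):
        self.headPtr = None
        self.lastNode = None

    def addElements(self, val):
        node = Node(val)
        if self.lastNode is None:
            self.headPtr = node
        else:
            self.lastNode.nextPtr = node
        self.lastNode = node

    # Yielding values during the traversal makes the list iterable,
    # replacing an explicit display loop.
    def __iter__(self):
        node = self.headPtr
        while node is not None:
            yield node.val
            node = node.nextPtr

lkd = LinkedList()
for v in (3, 8, 2, 4):
    lkd.addElements(v)
print(list(lkd))  # [3, 8, 2, 4]
```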
